00:00:00.001 Started by upstream project "autotest-per-patch" build number 126106 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.115 The recommended git tool is: git 00:00:00.115 using credential 00000000-0000-0000-0000-000000000002 00:00:00.117 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.195 Using shallow fetch with depth 1 00:00:00.195 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.195 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.992 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.006 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.020 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:07.020 > git config core.sparsecheckout # timeout=10 00:00:07.031 > git read-tree -mu HEAD # timeout=10 00:00:07.048 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:07.069 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:07.069 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:07.158 [Pipeline] Start of Pipeline 00:00:07.171 [Pipeline] library 00:00:07.173 Loading library shm_lib@master 00:00:07.173 Library shm_lib@master is cached. Copying from home. 00:00:07.186 [Pipeline] node 00:00:07.199 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.201 [Pipeline] { 00:00:07.211 [Pipeline] catchError 00:00:07.212 [Pipeline] { 00:00:07.221 [Pipeline] wrap 00:00:07.228 [Pipeline] { 00:00:07.236 [Pipeline] stage 00:00:07.237 [Pipeline] { (Prologue) 00:00:07.425 [Pipeline] sh 00:00:07.710 + logger -p user.info -t JENKINS-CI 00:00:07.732 [Pipeline] echo 00:00:07.735 Node: GP11 00:00:07.744 [Pipeline] sh 00:00:08.051 [Pipeline] setCustomBuildProperty 00:00:08.065 [Pipeline] echo 00:00:08.066 Cleanup processes 00:00:08.072 [Pipeline] sh 00:00:08.359 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.359 714495 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.375 [Pipeline] sh 00:00:08.661 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.661 ++ grep -v 'sudo pgrep' 00:00:08.661 ++ awk '{print $1}' 00:00:08.661 + sudo kill -9 00:00:08.661 + true 00:00:08.676 [Pipeline] cleanWs 00:00:08.686 [WS-CLEANUP] Deleting project workspace... 00:00:08.686 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.694 [WS-CLEANUP] done 00:00:08.699 [Pipeline] setCustomBuildProperty 00:00:08.715 [Pipeline] sh 00:00:09.000 + sudo git config --global --replace-all safe.directory '*' 00:00:09.102 [Pipeline] httpRequest 00:00:09.142 [Pipeline] echo 00:00:09.143 Sorcerer 10.211.164.101 is alive 00:00:09.153 [Pipeline] httpRequest 00:00:09.158 HttpMethod: GET 00:00:09.159 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:09.160 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:09.175 Response Code: HTTP/1.1 200 OK 00:00:09.176 Success: Status code 200 is in the accepted range: 200,404 00:00:09.176 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:13.586 [Pipeline] sh 00:00:13.873 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:13.890 [Pipeline] httpRequest 00:00:13.922 [Pipeline] echo 00:00:13.924 Sorcerer 10.211.164.101 is alive 00:00:13.935 [Pipeline] httpRequest 00:00:13.941 HttpMethod: GET 00:00:13.942 URL: http://10.211.164.101/packages/spdk_5e8e6dfc28f00b537c15b56c217efbfdbceb92b6.tar.gz 00:00:13.942 Sending request to url: http://10.211.164.101/packages/spdk_5e8e6dfc28f00b537c15b56c217efbfdbceb92b6.tar.gz 00:00:13.953 Response Code: HTTP/1.1 200 OK 00:00:13.953 Success: Status code 200 is in the accepted range: 200,404 00:00:13.954 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5e8e6dfc28f00b537c15b56c217efbfdbceb92b6.tar.gz 00:01:12.859 [Pipeline] sh 00:01:13.145 + tar --no-same-owner -xf spdk_5e8e6dfc28f00b537c15b56c217efbfdbceb92b6.tar.gz 00:01:15.688 [Pipeline] sh 00:01:15.975 + git -C spdk log --oneline -n5 00:01:15.975 5e8e6dfc2 [WIP] lib/ublk: single io_uring per pooler group 00:01:15.975 e97369083 test/ublk: perform ublk creation tests in FD mode 00:01:15.975 c86b521a7 lib/ublk: RPC to change fixed files state 00:01:15.975 4278ac688 lib/ublk: config. 
option for using IOSQE_FIXED_FILE 00:01:15.975 8ee34f93c nvmf: fix deprecated "transport" check in decode_rpc_listen_address 00:01:15.989 [Pipeline] } 00:01:16.007 [Pipeline] // stage 00:01:16.018 [Pipeline] stage 00:01:16.020 [Pipeline] { (Prepare) 00:01:16.042 [Pipeline] writeFile 00:01:16.063 [Pipeline] sh 00:01:16.351 + logger -p user.info -t JENKINS-CI 00:01:16.364 [Pipeline] sh 00:01:16.648 + logger -p user.info -t JENKINS-CI 00:01:16.661 [Pipeline] sh 00:01:16.945 + cat autorun-spdk.conf 00:01:16.945 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.945 SPDK_TEST_NVMF=1 00:01:16.945 SPDK_TEST_NVME_CLI=1 00:01:16.945 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.945 SPDK_TEST_NVMF_NICS=e810 00:01:16.945 SPDK_TEST_VFIOUSER=1 00:01:16.945 SPDK_RUN_UBSAN=1 00:01:16.945 NET_TYPE=phy 00:01:16.953 RUN_NIGHTLY=0 00:01:16.959 [Pipeline] readFile 00:01:16.990 [Pipeline] withEnv 00:01:16.992 [Pipeline] { 00:01:17.008 [Pipeline] sh 00:01:17.294 + set -ex 00:01:17.294 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:17.294 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:17.294 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.294 ++ SPDK_TEST_NVMF=1 00:01:17.294 ++ SPDK_TEST_NVME_CLI=1 00:01:17.294 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.294 ++ SPDK_TEST_NVMF_NICS=e810 00:01:17.294 ++ SPDK_TEST_VFIOUSER=1 00:01:17.294 ++ SPDK_RUN_UBSAN=1 00:01:17.294 ++ NET_TYPE=phy 00:01:17.294 ++ RUN_NIGHTLY=0 00:01:17.294 + case $SPDK_TEST_NVMF_NICS in 00:01:17.294 + DRIVERS=ice 00:01:17.294 + [[ tcp == \r\d\m\a ]] 00:01:17.294 + [[ -n ice ]] 00:01:17.294 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.294 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:17.294 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:17.294 rmmod: ERROR: Module irdma is not currently loaded 00:01:17.294 rmmod: ERROR: Module i40iw is not currently loaded 00:01:17.294 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.294 + true 00:01:17.294 + for D in $DRIVERS 00:01:17.294 + sudo modprobe ice 00:01:17.294 + exit 0 00:01:17.305 [Pipeline] } 00:01:17.326 [Pipeline] // withEnv 00:01:17.331 [Pipeline] } 00:01:17.344 [Pipeline] // stage 00:01:17.355 [Pipeline] catchError 00:01:17.356 [Pipeline] { 00:01:17.367 [Pipeline] timeout 00:01:17.367 Timeout set to expire in 50 min 00:01:17.368 [Pipeline] { 00:01:17.380 [Pipeline] stage 00:01:17.381 [Pipeline] { (Tests) 00:01:17.396 [Pipeline] sh 00:01:17.682 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.682 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.682 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.682 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:17.682 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.682 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.682 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:17.682 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.682 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.682 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.682 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:17.682 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.682 + source /etc/os-release 00:01:17.682 ++ NAME='Fedora Linux' 00:01:17.682 ++ VERSION='38 (Cloud Edition)' 00:01:17.682 ++ ID=fedora 00:01:17.682 ++ VERSION_ID=38 00:01:17.682 ++ VERSION_CODENAME= 00:01:17.682 ++ PLATFORM_ID=platform:f38 00:01:17.682 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:17.682 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.682 ++ LOGO=fedora-logo-icon 00:01:17.682 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:17.682 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.682 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:17.682 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.682 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.682 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.682 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:17.682 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.682 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:17.682 ++ SUPPORT_END=2024-05-14 00:01:17.682 ++ VARIANT='Cloud Edition' 00:01:17.682 ++ VARIANT_ID=cloud 00:01:17.682 + uname -a 00:01:17.682 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:17.682 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.620 Hugepages 00:01:18.620 node hugesize free / total 00:01:18.620 node0 1048576kB 0 / 0 00:01:18.620 node0 2048kB 0 / 0 00:01:18.620 node1 1048576kB 0 / 0 00:01:18.620 node1 2048kB 0 / 0 00:01:18.620 00:01:18.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.620 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:18.620 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:18.620 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:18.620 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:18.620 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:18.620 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:18.621 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:18.621 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:18.621 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:18.621 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:18.879 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:18.879 + rm -f /tmp/spdk-ld-path 00:01:18.879 + source autorun-spdk.conf 00:01:18.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.879 ++ SPDK_TEST_NVMF=1 00:01:18.879 ++ SPDK_TEST_NVME_CLI=1 00:01:18.879 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.879 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.879 ++ SPDK_TEST_VFIOUSER=1 00:01:18.879 ++ SPDK_RUN_UBSAN=1 00:01:18.879 ++ NET_TYPE=phy 00:01:18.879 ++ RUN_NIGHTLY=0 00:01:18.879 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.879 + [[ -n '' ]] 00:01:18.879 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.879 + for M in /var/spdk/build-*-manifest.txt 00:01:18.879 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:18.879 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.879 + for M in /var/spdk/build-*-manifest.txt 00:01:18.879 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.879 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.879 ++ uname 00:01:18.879 + [[ Linux == \L\i\n\u\x ]] 00:01:18.879 + sudo dmesg -T 00:01:18.879 + sudo dmesg --clear 00:01:18.879 + dmesg_pid=715169 00:01:18.879 + [[ Fedora Linux == FreeBSD ]] 00:01:18.879 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.879 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.879 + sudo dmesg -Tw 00:01:18.879 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.879 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.879 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.879 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.880 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.880 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.880 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.880 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.880 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.880 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.880 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.880 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.880 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.880 Test configuration: 00:01:18.880 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.880 SPDK_TEST_NVMF=1 00:01:18.880 SPDK_TEST_NVME_CLI=1 00:01:18.880 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.880 SPDK_TEST_NVMF_NICS=e810 00:01:18.880 SPDK_TEST_VFIOUSER=1 00:01:18.880 SPDK_RUN_UBSAN=1 00:01:18.880 NET_TYPE=phy 00:01:18.880 RUN_NIGHTLY=0 11:38:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.880 11:38:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.880 11:38:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.880 11:38:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.880 11:38:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.880 11:38:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.880 11:38:08 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.880 11:38:08 -- paths/export.sh@5 -- $ export PATH 00:01:18.880 11:38:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.880 11:38:08 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.880 11:38:08 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:18.880 11:38:08 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720777088.XXXXXX 00:01:18.880 11:38:08 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720777088.05BE93 00:01:18.880 11:38:08 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:18.880 11:38:08 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:18.880 11:38:08 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:18.880 11:38:08 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.880 11:38:08 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.880 11:38:08 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:18.880 11:38:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:18.880 11:38:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.880 11:38:08 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:18.880 11:38:08 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:18.880 11:38:08 -- pm/common@17 -- $ local monitor 00:01:18.880 11:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.880 11:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.880 11:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.880 11:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.880 11:38:08 -- pm/common@21 -- $ date +%s 00:01:18.880 11:38:08 -- pm/common@21 -- $ date +%s 00:01:18.880 11:38:08 -- pm/common@25 -- $ sleep 1 00:01:18.880 11:38:08 -- pm/common@21 -- $ date +%s 00:01:18.880 11:38:08 -- pm/common@21 -- $ date +%s 00:01:18.880 11:38:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720777088 00:01:18.880 11:38:08 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720777088 00:01:18.880 11:38:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720777088 00:01:18.880 11:38:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720777088 00:01:18.880 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720777088_collect-vmstat.pm.log 00:01:18.880 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720777088_collect-cpu-load.pm.log 00:01:18.880 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720777088_collect-cpu-temp.pm.log 00:01:18.880 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720777088_collect-bmc-pm.bmc.pm.log 00:01:19.818 11:38:09 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:19.818 11:38:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.818 11:38:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.818 11:38:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.818 11:38:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.818 Fri Jul 12 09:38:09 AM UTC 2024 00:01:19.818 11:38:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.818 v24.09-pre-51-g5e8e6dfc2 00:01:19.818 11:38:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:19.818 11:38:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.818 11:38:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.818 11:38:09 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:19.818 11:38:09 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:19.818 11:38:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.077 ************************************ 00:01:20.077 START TEST ubsan 00:01:20.077 ************************************ 00:01:20.077 11:38:09 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:20.077 using ubsan 00:01:20.077 00:01:20.077 real 0m0.000s 00:01:20.077 user 0m0.000s 00:01:20.077 sys 0m0.000s 00:01:20.077 11:38:09 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:20.077 11:38:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.077 ************************************ 00:01:20.077 END TEST ubsan 00:01:20.077 ************************************ 00:01:20.077 11:38:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.077 11:38:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.077 11:38:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.077 11:38:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:20.077 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:20.077 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:20.337 Using 'verbs' RDMA provider 00:01:30.886 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:40.895 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:40.895 Creating mk/config.mk...done. 00:01:40.895 Creating mk/cc.flags.mk...done. 00:01:40.895 Type 'make' to build. 00:01:40.895 11:38:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:40.895 11:38:30 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:40.895 11:38:30 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:40.895 11:38:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.895 ************************************ 00:01:40.895 START TEST make 00:01:40.895 ************************************ 00:01:40.895 11:38:30 make -- common/autotest_common.sh@1124 -- $ make -j48 00:01:40.895 make[1]: Nothing to be done for 'all'. 00:01:42.812 The Meson build system 00:01:42.812 Version: 1.3.1 00:01:42.812 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:42.812 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.812 Build type: native build 00:01:42.812 Project name: libvfio-user 00:01:42.812 Project version: 0.0.1 00:01:42.812 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:42.812 C linker for the host machine: cc ld.bfd 2.39-16 00:01:42.812 Host machine cpu family: x86_64 00:01:42.812 Host machine cpu: x86_64 00:01:42.812 Run-time dependency threads found: YES 00:01:42.812 Library dl found: YES 00:01:42.812 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.812 Run-time dependency json-c found: YES 0.17 00:01:42.812 Run-time dependency cmocka found: YES 1.1.7 00:01:42.812 Program pytest-3 found: NO 00:01:42.812 Program flake8 found: NO 00:01:42.812 Program misspell-fixer found: NO 00:01:42.812 Program restructuredtext-lint found: NO 00:01:42.812 Program valgrind found: YES (/usr/bin/valgrind) 00:01:42.812 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.812 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.812 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.812 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:42.812 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:42.812 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:42.812 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:42.812 Build targets in project: 8 00:01:42.812 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:42.812 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:42.812 00:01:42.812 libvfio-user 0.0.1 00:01:42.812 00:01:42.812 User defined options 00:01:42.812 buildtype : debug 00:01:42.812 default_library: shared 00:01:42.812 libdir : /usr/local/lib 00:01:42.812 00:01:42.812 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.396 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.396 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:43.396 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:43.396 [3/37] Compiling C object samples/null.p/null.c.o 00:01:43.396 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:43.396 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:43.396 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:43.396 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:43.653 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:43.653 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:43.653 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:43.653 [11/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:43.653 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:43.653 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:43.653 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:43.653 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:43.653 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:43.653 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:43.653 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:43.653 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:43.653 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:43.653 [21/37] Compiling C object samples/server.p/server.c.o 00:01:43.653 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:43.653 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:43.653 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:43.653 [25/37] Compiling C object samples/client.p/client.c.o 00:01:43.653 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.653 [27/37] Linking target samples/client 00:01:43.915 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:43.915 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.915 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:43.915 [31/37] Linking target test/unit_tests 00:01:44.175 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.175 [33/37] Linking target samples/server 00:01:44.175 [34/37] Linking target samples/null 00:01:44.175 [35/37] Linking target samples/lspci 00:01:44.175 [36/37] Linking target samples/gpio-pci-idio-16 00:01:44.175 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:44.175 INFO: autodetecting backend as ninja 00:01:44.175 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:44.175 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.116 ninja: no work to do. 00:01:49.303 The Meson build system 00:01:49.303 Version: 1.3.1 00:01:49.303 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:49.303 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:49.303 Build type: native build 00:01:49.303 Program cat found: YES (/usr/bin/cat) 00:01:49.303 Project name: DPDK 00:01:49.303 Project version: 24.03.0 00:01:49.303 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.303 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.303 Host machine cpu family: x86_64 00:01:49.303 Host machine cpu: x86_64 00:01:49.303 Message: ## Building in Developer Mode ## 00:01:49.303 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.303 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:49.303 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.303 Program python3 found: YES (/usr/bin/python3) 00:01:49.303 Program cat found: YES (/usr/bin/cat) 00:01:49.303 Compiler for C supports arguments -march=native: YES 00:01:49.303 Checking for size of "void *" : 8 00:01:49.303 Checking for size of "void *" : 8 (cached) 00:01:49.303 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:49.303 Library m found: YES 00:01:49.303 Library numa found: YES 00:01:49.303 Has header "numaif.h" : YES 00:01:49.303 Library fdt found: NO 00:01:49.303 Library execinfo found: NO 00:01:49.303 Has header "execinfo.h" : YES 00:01:49.303 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.303 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.303 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.303 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.303 Run-time dependency openssl found: YES 3.0.9 00:01:49.303 Run-time dependency libpcap found: YES 1.10.4 00:01:49.303 Has header "pcap.h" with dependency libpcap: YES 00:01:49.303 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.303 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.303 Compiler for C supports arguments -Wformat: YES 00:01:49.303 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.303 Compiler for C supports arguments -Wformat-security: NO 00:01:49.303 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.303 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.303 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.303 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.303 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.303 Compiler for C supports arguments -Wsign-compare: YES 00:01:49.303 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.303 Compiler for C supports arguments -Wundef: YES 00:01:49.303 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.303 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.303 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:49.303 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.303 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.303 Program objdump found: YES (/usr/bin/objdump) 00:01:49.303 Compiler for C supports arguments -mavx512f: YES 00:01:49.303 Checking if "AVX512 checking" compiles: YES 00:01:49.303 Fetching value of define "__SSE4_2__" : 1 00:01:49.303 Fetching value of define "__AES__" : 1 00:01:49.303 Fetching value of define "__AVX__" : 1 00:01:49.303 Fetching value of define "__AVX2__" : (undefined) 00:01:49.303 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.303 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.303 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.303 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.303 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.303 Fetching value of define "__PCLMUL__" : 1 00:01:49.303 Fetching value of define "__RDRND__" : 1 00:01:49.303 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.303 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.303 Fetching value of define "__znver1__" : (undefined) 00:01:49.303 Fetching value of define "__znver2__" : (undefined) 00:01:49.303 Fetching value of define "__znver3__" : (undefined) 00:01:49.303 Fetching value of define "__znver4__" : (undefined) 00:01:49.303 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.303 Message: lib/log: Defining dependency "log" 00:01:49.303 Message: lib/kvargs: Defining dependency "kvargs" 00:01:49.303 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.303 Checking for function "getentropy" : NO 00:01:49.303 Message: lib/eal: Defining dependency "eal" 00:01:49.303 Message: lib/ring: Defining dependency "ring" 00:01:49.303 Message: lib/rcu: Defining dependency "rcu" 00:01:49.303 Message: lib/mempool: Defining dependency "mempool" 00:01:49.303 Message: lib/mbuf: Defining dependency "mbuf" 00:01:49.303 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.303 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.303 Compiler for C supports arguments -mpclmul: YES 00:01:49.303 Compiler for C supports arguments -maes: YES 00:01:49.303 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.303 Compiler for C supports arguments -mavx512bw: YES 00:01:49.303 Compiler for C supports arguments -mavx512dq: YES 00:01:49.303 Compiler for C supports arguments -mavx512vl: YES 00:01:49.303 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.303 Compiler for C supports arguments -mavx2: YES 00:01:49.303 Compiler for C supports arguments -mavx: YES 00:01:49.303 Message: lib/net: Defining dependency "net" 00:01:49.303 Message: lib/meter: Defining dependency "meter" 00:01:49.303 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.303 Message: lib/pci: Defining dependency "pci" 00:01:49.303 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.303 Message: lib/hash: Defining dependency "hash" 00:01:49.303 Message: lib/timer: Defining dependency "timer" 00:01:49.303 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.303 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.303 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.303 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.303 Message: lib/power: Defining dependency "power" 00:01:49.303 Message: lib/reorder: Defining dependency "reorder" 00:01:49.303 
Message: lib/security: Defining dependency "security" 00:01:49.303 Has header "linux/userfaultfd.h" : YES 00:01:49.303 Has header "linux/vduse.h" : YES 00:01:49.303 Message: lib/vhost: Defining dependency "vhost" 00:01:49.303 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:49.303 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:49.303 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:49.303 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:49.303 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:49.303 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:49.303 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:49.303 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:49.303 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:49.304 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:49.304 Program doxygen found: YES (/usr/bin/doxygen) 00:01:49.304 Configuring doxy-api-html.conf using configuration 00:01:49.304 Configuring doxy-api-man.conf using configuration 00:01:49.304 Program mandb found: YES (/usr/bin/mandb) 00:01:49.304 Program sphinx-build found: NO 00:01:49.304 Configuring rte_build_config.h using configuration 00:01:49.304 Message: 00:01:49.304 ================= 00:01:49.304 Applications Enabled 00:01:49.304 ================= 00:01:49.304 00:01:49.304 apps: 00:01:49.304 00:01:49.304 00:01:49.304 Message: 00:01:49.304 ================= 00:01:49.304 Libraries Enabled 00:01:49.304 ================= 00:01:49.304 00:01:49.304 libs: 00:01:49.304 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:49.304 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:49.304 cryptodev, dmadev, power, reorder, security, vhost, 00:01:49.304 00:01:49.304 Message: 00:01:49.304 =============== 00:01:49.304 Drivers Enabled 00:01:49.304 =============== 00:01:49.304 00:01:49.304 common: 00:01:49.304 00:01:49.304 bus: 00:01:49.304 pci, vdev, 00:01:49.304 mempool: 00:01:49.304 ring, 00:01:49.304 dma: 00:01:49.304 00:01:49.304 net: 00:01:49.304 00:01:49.304 crypto: 00:01:49.304 00:01:49.304 compress: 00:01:49.304 00:01:49.304 vdpa: 00:01:49.304 00:01:49.304 00:01:49.304 Message: 00:01:49.304 ================= 00:01:49.304 Content Skipped 00:01:49.304 ================= 00:01:49.304 00:01:49.304 apps: 00:01:49.304 dumpcap: explicitly disabled via build config 00:01:49.304 graph: explicitly disabled via build config 00:01:49.304 pdump: explicitly disabled via build config 00:01:49.304 proc-info: explicitly disabled via build config 00:01:49.304 test-acl: explicitly disabled via build config 00:01:49.304 test-bbdev: explicitly disabled via build config 00:01:49.304 test-cmdline: explicitly disabled via build config 00:01:49.304 test-compress-perf: explicitly disabled via build config 00:01:49.304 test-crypto-perf: explicitly disabled via build config 00:01:49.304 test-dma-perf: explicitly disabled via build config 00:01:49.304 test-eventdev: explicitly disabled via build config 00:01:49.304 test-fib: explicitly disabled via build config 00:01:49.304 test-flow-perf: explicitly disabled via build config 00:01:49.304 test-gpudev: explicitly disabled via build config 00:01:49.304 test-mldev: explicitly disabled via build config 00:01:49.304 test-pipeline: explicitly disabled via build config 00:01:49.304 test-pmd: explicitly disabled via build config 
00:01:49.304 test-regex: explicitly disabled via build config 00:01:49.304 test-sad: explicitly disabled via build config 00:01:49.304 test-security-perf: explicitly disabled via build config 00:01:49.304 00:01:49.304 libs: 00:01:49.304 argparse: explicitly disabled via build config 00:01:49.304 metrics: explicitly disabled via build config 00:01:49.304 acl: explicitly disabled via build config 00:01:49.304 bbdev: explicitly disabled via build config 00:01:49.304 bitratestats: explicitly disabled via build config 00:01:49.304 bpf: explicitly disabled via build config 00:01:49.304 cfgfile: explicitly disabled via build config 00:01:49.304 distributor: explicitly disabled via build config 00:01:49.304 efd: explicitly disabled via build config 00:01:49.304 eventdev: explicitly disabled via build config 00:01:49.304 dispatcher: explicitly disabled via build config 00:01:49.304 gpudev: explicitly disabled via build config 00:01:49.304 gro: explicitly disabled via build config 00:01:49.304 gso: explicitly disabled via build config 00:01:49.304 ip_frag: explicitly disabled via build config 00:01:49.304 jobstats: explicitly disabled via build config 00:01:49.304 latencystats: explicitly disabled via build config 00:01:49.304 lpm: explicitly disabled via build config 00:01:49.304 member: explicitly disabled via build config 00:01:49.304 pcapng: explicitly disabled via build config 00:01:49.304 rawdev: explicitly disabled via build config 00:01:49.304 regexdev: explicitly disabled via build config 00:01:49.304 mldev: explicitly disabled via build config 00:01:49.304 rib: explicitly disabled via build config 00:01:49.304 sched: explicitly disabled via build config 00:01:49.304 stack: explicitly disabled via build config 00:01:49.304 ipsec: explicitly disabled via build config 00:01:49.304 pdcp: explicitly disabled via build config 00:01:49.304 fib: explicitly disabled via build config 00:01:49.304 port: explicitly disabled via build config 00:01:49.304 pdump: explicitly disabled via build config 00:01:49.304 table: explicitly disabled via build config 00:01:49.304 pipeline: explicitly disabled via build config 00:01:49.304 graph: explicitly disabled via build config 00:01:49.304 node: explicitly disabled via build config 00:01:49.304 00:01:49.304 drivers: 00:01:49.304 common/cpt: not in enabled drivers build config 00:01:49.304 common/dpaax: not in enabled drivers build config 00:01:49.304 common/iavf: not in enabled drivers build config 00:01:49.304 common/idpf: not in enabled drivers build config 00:01:49.304 common/ionic: not in enabled drivers build config 00:01:49.304 common/mvep: not in enabled drivers build config 00:01:49.304 common/octeontx: not in enabled drivers build config 00:01:49.304 bus/auxiliary: not in enabled drivers build config 00:01:49.304 bus/cdx: not in enabled drivers build config 00:01:49.304 bus/dpaa: not in enabled drivers build config 00:01:49.304 bus/fslmc: not in enabled drivers build config 00:01:49.304 bus/ifpga: not in enabled drivers build config 00:01:49.304 bus/platform: not in enabled drivers build config 00:01:49.304 bus/uacce: not in enabled drivers build config 00:01:49.304 bus/vmbus: not in enabled drivers build config 00:01:49.304 common/cnxk: not in enabled drivers build config 00:01:49.304 common/mlx5: not in enabled drivers build config 00:01:49.304 common/nfp: not in enabled drivers build config 00:01:49.304 common/nitrox: not in enabled drivers build config 00:01:49.304 common/qat: not in enabled drivers build config 00:01:49.304 common/sfc_efx: not in 
enabled drivers build config 00:01:49.304 mempool/bucket: not in enabled drivers build config 00:01:49.304 mempool/cnxk: not in enabled drivers build config 00:01:49.304 mempool/dpaa: not in enabled drivers build config 00:01:49.304 mempool/dpaa2: not in enabled drivers build config 00:01:49.304 mempool/octeontx: not in enabled drivers build config 00:01:49.304 mempool/stack: not in enabled drivers build config 00:01:49.304 dma/cnxk: not in enabled drivers build config 00:01:49.304 dma/dpaa: not in enabled drivers build config 00:01:49.304 dma/dpaa2: not in enabled drivers build config 00:01:49.304 dma/hisilicon: not in enabled drivers build config 00:01:49.304 dma/idxd: not in enabled drivers build config 00:01:49.304 dma/ioat: not in enabled drivers build config 00:01:49.304 dma/skeleton: not in enabled drivers build config 00:01:49.304 net/af_packet: not in enabled drivers build config 00:01:49.304 net/af_xdp: not in enabled drivers build config 00:01:49.304 net/ark: not in enabled drivers build config 00:01:49.304 net/atlantic: not in enabled drivers build config 00:01:49.304 net/avp: not in enabled drivers build config 00:01:49.304 net/axgbe: not in enabled drivers build config 00:01:49.304 net/bnx2x: not in enabled drivers build config 00:01:49.304 net/bnxt: not in enabled drivers build config 00:01:49.304 net/bonding: not in enabled drivers build config 00:01:49.304 net/cnxk: not in enabled drivers build config 00:01:49.304 net/cpfl: not in enabled drivers build config 00:01:49.304 net/cxgbe: not in enabled drivers build config 00:01:49.304 net/dpaa: not in enabled drivers build config 00:01:49.304 net/dpaa2: not in enabled drivers build config 00:01:49.304 net/e1000: not in enabled drivers build config 00:01:49.304 net/ena: not in enabled drivers build config 00:01:49.304 net/enetc: not in enabled drivers build config 00:01:49.304 net/enetfec: not in enabled drivers build config 00:01:49.304 net/enic: not in enabled drivers build config 00:01:49.304 net/failsafe: not in enabled drivers build config 00:01:49.304 net/fm10k: not in enabled drivers build config 00:01:49.304 net/gve: not in enabled drivers build config 00:01:49.304 net/hinic: not in enabled drivers build config 00:01:49.304 net/hns3: not in enabled drivers build config 00:01:49.304 net/i40e: not in enabled drivers build config 00:01:49.304 net/iavf: not in enabled drivers build config 00:01:49.304 net/ice: not in enabled drivers build config 00:01:49.304 net/idpf: not in enabled drivers build config 00:01:49.304 net/igc: not in enabled drivers build config 00:01:49.304 net/ionic: not in enabled drivers build config 00:01:49.304 net/ipn3ke: not in enabled drivers build config 00:01:49.304 net/ixgbe: not in enabled drivers build config 00:01:49.304 net/mana: not in enabled drivers build config 00:01:49.304 net/memif: not in enabled drivers build config 00:01:49.304 net/mlx4: not in enabled drivers build config 00:01:49.304 net/mlx5: not in enabled drivers build config 00:01:49.304 net/mvneta: not in enabled drivers build config 00:01:49.304 net/mvpp2: not in enabled drivers build config 00:01:49.304 net/netvsc: not in enabled drivers build config 00:01:49.304 net/nfb: not in enabled drivers build config 00:01:49.304 net/nfp: not in enabled drivers build config 00:01:49.304 net/ngbe: not in enabled drivers build config 00:01:49.304 net/null: not in enabled drivers build config 00:01:49.304 net/octeontx: not in enabled drivers build config 00:01:49.304 net/octeon_ep: not in enabled drivers build config 00:01:49.304 
net/pcap: not in enabled drivers build config 00:01:49.304 net/pfe: not in enabled drivers build config 00:01:49.304 net/qede: not in enabled drivers build config 00:01:49.304 net/ring: not in enabled drivers build config 00:01:49.304 net/sfc: not in enabled drivers build config 00:01:49.304 net/softnic: not in enabled drivers build config 00:01:49.304 net/tap: not in enabled drivers build config 00:01:49.304 net/thunderx: not in enabled drivers build config 00:01:49.304 net/txgbe: not in enabled drivers build config 00:01:49.304 net/vdev_netvsc: not in enabled drivers build config 00:01:49.304 net/vhost: not in enabled drivers build config 00:01:49.304 net/virtio: not in enabled drivers build config 00:01:49.304 net/vmxnet3: not in enabled drivers build config 00:01:49.304 raw/*: missing internal dependency, "rawdev" 00:01:49.304 crypto/armv8: not in enabled drivers build config 00:01:49.304 crypto/bcmfs: not in enabled drivers build config 00:01:49.304 crypto/caam_jr: not in enabled drivers build config 00:01:49.304 crypto/ccp: not in enabled drivers build config 00:01:49.304 crypto/cnxk: not in enabled drivers build config 00:01:49.304 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.304 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.304 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.304 crypto/mlx5: not in enabled drivers build config 00:01:49.304 crypto/mvsam: not in enabled drivers build config 00:01:49.304 crypto/nitrox: not in enabled drivers build config 00:01:49.304 crypto/null: not in enabled drivers build config 00:01:49.304 crypto/octeontx: not in enabled drivers build config 00:01:49.304 crypto/openssl: not in enabled drivers build config 00:01:49.304 crypto/scheduler: not in enabled drivers build config 00:01:49.304 crypto/uadk: not in enabled drivers build config 00:01:49.304 crypto/virtio: not in enabled drivers build config 00:01:49.304 compress/isal: not in enabled drivers build config 00:01:49.304 compress/mlx5: not in enabled drivers build config 00:01:49.304 compress/nitrox: not in enabled drivers build config 00:01:49.304 compress/octeontx: not in enabled drivers build config 00:01:49.304 compress/zlib: not in enabled drivers build config 00:01:49.304 regex/*: missing internal dependency, "regexdev" 00:01:49.304 ml/*: missing internal dependency, "mldev" 00:01:49.304 vdpa/ifc: not in enabled drivers build config 00:01:49.304 vdpa/mlx5: not in enabled drivers build config 00:01:49.304 vdpa/nfp: not in enabled drivers build config 00:01:49.304 vdpa/sfc: not in enabled drivers build config 00:01:49.304 event/*: missing internal dependency, "eventdev" 00:01:49.304 baseband/*: missing internal dependency, "bbdev" 00:01:49.304 gpu/*: missing internal dependency, "gpudev" 00:01:49.304 00:01:49.304 00:01:49.563 Build targets in project: 85 00:01:49.563 00:01:49.563 DPDK 24.03.0 00:01:49.563 00:01:49.563 User defined options 00:01:49.563 buildtype : debug 00:01:49.563 default_library : shared 00:01:49.563 libdir : lib 00:01:49.563 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:49.563 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:49.563 c_link_args : 00:01:49.563 cpu_instruction_set: native 00:01:49.563 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:49.563 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:49.563 enable_docs : false 00:01:49.563 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:49.563 enable_kmods : false 00:01:49.563 tests : false 00:01:49.563 00:01:49.563 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.135 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.135 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.135 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.135 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.135 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.135 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.135 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.135 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.135 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.135 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.135 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.135 [11/268] Linking static target lib/librte_kvargs.a 00:01:50.135 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.135 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.135 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.135 [15/268] Linking static target lib/librte_log.a 00:01:50.397 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.970 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.970 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.970 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.970 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.970 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.970 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.970 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.970 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.970 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.970 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.970 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.970 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.970 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.970 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.970 [31/268] 
Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.970 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.970 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.970 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.970 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.970 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.970 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.970 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.970 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.970 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.970 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.232 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.232 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.232 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.232 [45/268] Linking static target lib/librte_telemetry.a 00:01:51.232 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.232 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.232 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.232 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.232 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.232 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.232 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.232 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.232 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.232 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.232 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.233 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.233 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.233 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.233 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.233 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.233 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.233 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.493 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.493 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.493 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.493 [67/268] Linking target lib/librte_log.so.24.1 00:01:51.493 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.493 [69/268] Linking static target lib/librte_pci.a 00:01:51.757 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.757 [71/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.757 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.757 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.757 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.017 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.017 [76/268] Linking target lib/librte_kvargs.so.24.1 00:01:52.017 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.017 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.017 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.017 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.017 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.017 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.017 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.017 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.017 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.017 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.017 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.017 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.017 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.017 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.017 [91/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.017 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.017 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.017 [94/268] Linking static target lib/librte_meter.a 00:01:52.017 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.017 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.017 [97/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.017 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.017 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.279 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.279 [101/268] Linking target lib/librte_telemetry.so.24.1 00:01:52.279 [102/268] Linking static target lib/librte_ring.a 00:01:52.279 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.279 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.279 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.279 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.279 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.279 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.279 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.279 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.279 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.279 [112/268] Generating 
symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.279 [113/268] Linking static target lib/librte_eal.a 00:01:52.279 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.279 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.279 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.279 [117/268] Linking static target lib/librte_mempool.a 00:01:52.279 [118/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.279 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.279 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.279 [121/268] Linking static target lib/librte_rcu.a 00:01:52.279 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.279 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.279 [124/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.279 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.279 [126/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.542 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.542 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.542 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.542 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.542 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.542 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.542 [133/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.542 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.542 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.542 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.542 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.542 [138/268] Linking static target lib/librte_net.a 00:01:52.806 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.806 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.806 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.806 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.806 [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.806 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.806 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.066 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.066 [147/268] Linking static target lib/librte_cmdline.a 00:01:53.066 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.066 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.066 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.066 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.066 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.325 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.325 [154/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.325 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.325 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.325 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.325 [158/268] Linking static target lib/librte_timer.a 00:01:53.325 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.325 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.325 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.325 [162/268] Linking static target lib/librte_dmadev.a 00:01:53.325 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.325 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.325 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.325 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.325 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.583 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.583 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.583 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.583 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.583 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.583 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.583 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.583 [175/268] Linking static target lib/librte_power.a 00:01:53.583 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.583 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.583 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.583 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.583 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.583 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.583 [182/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.583 [183/268] Linking static target lib/librte_compressdev.a 00:01:53.583 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.583 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.841 [186/268] Linking static target lib/librte_hash.a 00:01:53.841 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.841 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.841 [189/268] Linking static target lib/librte_reorder.a 00:01:53.841 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.841 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.841 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
00:01:53.841 [193/268] Linking static target lib/librte_mbuf.a 00:01:53.841 [194/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.841 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.841 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.841 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.841 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.841 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.841 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:53.841 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.841 [202/268] Linking static target lib/librte_security.a 00:01:54.099 [203/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.099 [204/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.099 [205/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.099 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.099 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.099 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.099 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.099 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.099 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.099 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.099 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:54.099 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.356 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.356 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.356 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.356 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.356 [219/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.356 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.356 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.356 [222/268] Linking static target lib/librte_cryptodev.a 00:01:54.356 [223/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.356 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.356 [225/268] Linking static target lib/librte_ethdev.a 00:01:54.614 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.547 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.919 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.846 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:58.846 [230/268] Linking target lib/librte_eal.so.24.1 00:01:58.846 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.846 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.846 [233/268] Linking target lib/librte_ring.so.24.1 00:01:58.846 [234/268] Linking target lib/librte_meter.so.24.1 00:01:58.846 [235/268] Linking target lib/librte_pci.so.24.1 00:01:58.846 [236/268] Linking target lib/librte_timer.so.24.1 00:01:58.846 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:58.846 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.846 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.846 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.846 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.846 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.846 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.846 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:58.846 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:58.846 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.104 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.104 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.104 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.104 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:59.104 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.104 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:59.104 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:59.104 [254/268] Linking target lib/librte_net.so.24.1 00:01:59.104 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:59.362 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.362 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.362 [258/268] Linking target lib/librte_hash.so.24.1 00:01:59.362 [259/268] Linking target lib/librte_security.so.24.1 00:01:59.362 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.362 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.620 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.620 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.620 [264/268] Linking target lib/librte_power.so.24.1 00:02:02.154 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.154 [266/268] Linking static target lib/librte_vhost.a 00:02:03.090 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.090 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:03.090 INFO: autodetecting backend as ninja 00:02:03.090 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:04.022 CC lib/ut/ut.o 00:02:04.022 CC lib/log/log.o 00:02:04.022 CC lib/ut_mock/mock.o 00:02:04.022 CC lib/log/log_flags.o 00:02:04.022 CC lib/log/log_deprecated.o 00:02:04.022 LIB libspdk_log.a 
00:02:04.022 LIB libspdk_ut_mock.a 00:02:04.022 LIB libspdk_ut.a 00:02:04.022 SO libspdk_ut.so.2.0 00:02:04.022 SO libspdk_ut_mock.so.6.0 00:02:04.022 SO libspdk_log.so.7.0 00:02:04.022 SYMLINK libspdk_ut.so 00:02:04.022 SYMLINK libspdk_ut_mock.so 00:02:04.022 SYMLINK libspdk_log.so 00:02:04.280 CXX lib/trace_parser/trace.o 00:02:04.280 CC lib/ioat/ioat.o 00:02:04.280 CC lib/dma/dma.o 00:02:04.280 CC lib/util/bit_array.o 00:02:04.280 CC lib/util/base64.o 00:02:04.280 CC lib/util/cpuset.o 00:02:04.280 CC lib/util/crc16.o 00:02:04.280 CC lib/util/crc32.o 00:02:04.280 CC lib/util/crc32c.o 00:02:04.280 CC lib/util/crc32_ieee.o 00:02:04.280 CC lib/util/crc64.o 00:02:04.280 CC lib/util/dif.o 00:02:04.280 CC lib/util/fd.o 00:02:04.280 CC lib/util/file.o 00:02:04.280 CC lib/util/hexlify.o 00:02:04.280 CC lib/util/iov.o 00:02:04.280 CC lib/util/math.o 00:02:04.280 CC lib/util/pipe.o 00:02:04.280 CC lib/util/strerror_tls.o 00:02:04.280 CC lib/util/string.o 00:02:04.280 CC lib/util/uuid.o 00:02:04.280 CC lib/util/fd_group.o 00:02:04.280 CC lib/util/xor.o 00:02:04.280 CC lib/util/zipf.o 00:02:04.280 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.280 CC lib/vfio_user/host/vfio_user.o 00:02:04.538 LIB libspdk_dma.a 00:02:04.538 SO libspdk_dma.so.4.0 00:02:04.538 SYMLINK libspdk_dma.so 00:02:04.538 LIB libspdk_ioat.a 00:02:04.538 SO libspdk_ioat.so.7.0 00:02:04.795 SYMLINK libspdk_ioat.so 00:02:04.795 LIB libspdk_vfio_user.a 00:02:04.795 SO libspdk_vfio_user.so.5.0 00:02:04.795 SYMLINK libspdk_vfio_user.so 00:02:04.795 LIB libspdk_util.a 00:02:04.795 SO libspdk_util.so.9.0 00:02:05.053 SYMLINK libspdk_util.so 00:02:05.310 CC lib/vmd/vmd.o 00:02:05.310 CC lib/conf/conf.o 00:02:05.310 CC lib/json/json_parse.o 00:02:05.310 CC lib/idxd/idxd.o 00:02:05.310 CC lib/env_dpdk/env.o 00:02:05.310 CC lib/vmd/led.o 00:02:05.310 CC lib/rdma/common.o 00:02:05.310 CC lib/rdma/rdma_verbs.o 00:02:05.310 CC lib/json/json_util.o 00:02:05.310 CC lib/env_dpdk/memory.o 00:02:05.310 CC lib/idxd/idxd_user.o 00:02:05.310 CC lib/env_dpdk/pci.o 00:02:05.310 CC lib/json/json_write.o 00:02:05.310 CC lib/idxd/idxd_kernel.o 00:02:05.310 CC lib/env_dpdk/init.o 00:02:05.310 CC lib/env_dpdk/threads.o 00:02:05.310 CC lib/env_dpdk/pci_ioat.o 00:02:05.310 CC lib/env_dpdk/pci_virtio.o 00:02:05.310 CC lib/env_dpdk/pci_vmd.o 00:02:05.310 CC lib/env_dpdk/pci_idxd.o 00:02:05.310 CC lib/env_dpdk/pci_event.o 00:02:05.310 CC lib/env_dpdk/sigbus_handler.o 00:02:05.310 CC lib/env_dpdk/pci_dpdk.o 00:02:05.310 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.310 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.310 LIB libspdk_trace_parser.a 00:02:05.310 SO libspdk_trace_parser.so.5.0 00:02:05.310 SYMLINK libspdk_trace_parser.so 00:02:05.567 LIB libspdk_conf.a 00:02:05.567 SO libspdk_conf.so.6.0 00:02:05.567 LIB libspdk_rdma.a 00:02:05.567 SYMLINK libspdk_conf.so 00:02:05.567 LIB libspdk_json.a 00:02:05.567 SO libspdk_rdma.so.6.0 00:02:05.567 SO libspdk_json.so.6.0 00:02:05.567 SYMLINK libspdk_rdma.so 00:02:05.567 SYMLINK libspdk_json.so 00:02:05.824 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.824 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.824 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.824 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.824 LIB libspdk_idxd.a 00:02:05.824 SO libspdk_idxd.so.12.0 00:02:05.824 LIB libspdk_vmd.a 00:02:05.824 SYMLINK libspdk_idxd.so 00:02:05.824 SO libspdk_vmd.so.6.0 00:02:06.082 SYMLINK libspdk_vmd.so 00:02:06.082 LIB libspdk_jsonrpc.a 00:02:06.082 SO libspdk_jsonrpc.so.6.0 00:02:06.082 SYMLINK libspdk_jsonrpc.so 00:02:06.338 CC lib/rpc/rpc.o 
00:02:06.596 LIB libspdk_rpc.a 00:02:06.596 SO libspdk_rpc.so.6.0 00:02:06.596 SYMLINK libspdk_rpc.so 00:02:06.853 CC lib/keyring/keyring.o 00:02:06.853 CC lib/notify/notify.o 00:02:06.853 CC lib/trace/trace.o 00:02:06.853 CC lib/keyring/keyring_rpc.o 00:02:06.853 CC lib/trace/trace_flags.o 00:02:06.853 CC lib/notify/notify_rpc.o 00:02:06.853 CC lib/trace/trace_rpc.o 00:02:06.853 LIB libspdk_notify.a 00:02:07.111 SO libspdk_notify.so.6.0 00:02:07.111 LIB libspdk_keyring.a 00:02:07.111 SYMLINK libspdk_notify.so 00:02:07.111 SO libspdk_keyring.so.1.0 00:02:07.111 LIB libspdk_trace.a 00:02:07.111 SO libspdk_trace.so.10.0 00:02:07.111 SYMLINK libspdk_keyring.so 00:02:07.111 SYMLINK libspdk_trace.so 00:02:07.368 LIB libspdk_env_dpdk.a 00:02:07.368 SO libspdk_env_dpdk.so.14.0 00:02:07.368 CC lib/thread/thread.o 00:02:07.368 CC lib/thread/iobuf.o 00:02:07.368 CC lib/sock/sock.o 00:02:07.368 CC lib/sock/sock_rpc.o 00:02:07.368 SYMLINK libspdk_env_dpdk.so 00:02:07.626 LIB libspdk_sock.a 00:02:07.626 SO libspdk_sock.so.9.0 00:02:07.883 SYMLINK libspdk_sock.so 00:02:07.883 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.883 CC lib/nvme/nvme_ctrlr.o 00:02:07.883 CC lib/nvme/nvme_fabric.o 00:02:07.883 CC lib/nvme/nvme_ns_cmd.o 00:02:07.883 CC lib/nvme/nvme_ns.o 00:02:07.883 CC lib/nvme/nvme_pcie_common.o 00:02:07.883 CC lib/nvme/nvme_pcie.o 00:02:07.883 CC lib/nvme/nvme_qpair.o 00:02:07.883 CC lib/nvme/nvme.o 00:02:07.883 CC lib/nvme/nvme_quirks.o 00:02:07.883 CC lib/nvme/nvme_transport.o 00:02:07.883 CC lib/nvme/nvme_discovery.o 00:02:07.883 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.883 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.883 CC lib/nvme/nvme_tcp.o 00:02:07.883 CC lib/nvme/nvme_opal.o 00:02:07.883 CC lib/nvme/nvme_io_msg.o 00:02:07.883 CC lib/nvme/nvme_poll_group.o 00:02:07.883 CC lib/nvme/nvme_zns.o 00:02:07.883 CC lib/nvme/nvme_stubs.o 00:02:07.883 CC lib/nvme/nvme_auth.o 00:02:07.883 CC lib/nvme/nvme_cuse.o 00:02:07.883 CC lib/nvme/nvme_vfio_user.o 00:02:07.883 CC lib/nvme/nvme_rdma.o 00:02:08.817 LIB libspdk_thread.a 00:02:08.817 SO libspdk_thread.so.10.0 00:02:09.076 SYMLINK libspdk_thread.so 00:02:09.076 CC lib/blob/blobstore.o 00:02:09.076 CC lib/vfu_tgt/tgt_endpoint.o 00:02:09.076 CC lib/blob/request.o 00:02:09.076 CC lib/accel/accel.o 00:02:09.076 CC lib/vfu_tgt/tgt_rpc.o 00:02:09.076 CC lib/blob/zeroes.o 00:02:09.076 CC lib/accel/accel_rpc.o 00:02:09.076 CC lib/init/json_config.o 00:02:09.076 CC lib/virtio/virtio.o 00:02:09.076 CC lib/blob/blob_bs_dev.o 00:02:09.076 CC lib/accel/accel_sw.o 00:02:09.076 CC lib/init/subsystem.o 00:02:09.076 CC lib/virtio/virtio_vhost_user.o 00:02:09.076 CC lib/init/subsystem_rpc.o 00:02:09.076 CC lib/virtio/virtio_vfio_user.o 00:02:09.076 CC lib/init/rpc.o 00:02:09.076 CC lib/virtio/virtio_pci.o 00:02:09.334 LIB libspdk_init.a 00:02:09.334 SO libspdk_init.so.5.0 00:02:09.591 LIB libspdk_virtio.a 00:02:09.591 LIB libspdk_vfu_tgt.a 00:02:09.591 SYMLINK libspdk_init.so 00:02:09.591 SO libspdk_vfu_tgt.so.3.0 00:02:09.591 SO libspdk_virtio.so.7.0 00:02:09.591 SYMLINK libspdk_vfu_tgt.so 00:02:09.591 SYMLINK libspdk_virtio.so 00:02:09.591 CC lib/event/app.o 00:02:09.591 CC lib/event/reactor.o 00:02:09.591 CC lib/event/log_rpc.o 00:02:09.591 CC lib/event/app_rpc.o 00:02:09.591 CC lib/event/scheduler_static.o 00:02:10.155 LIB libspdk_event.a 00:02:10.155 SO libspdk_event.so.13.1 00:02:10.155 SYMLINK libspdk_event.so 00:02:10.155 LIB libspdk_accel.a 00:02:10.155 SO libspdk_accel.so.15.0 00:02:10.155 SYMLINK libspdk_accel.so 00:02:10.155 LIB libspdk_nvme.a 00:02:10.412 SO 
libspdk_nvme.so.13.0 00:02:10.412 CC lib/bdev/bdev.o 00:02:10.412 CC lib/bdev/bdev_rpc.o 00:02:10.412 CC lib/bdev/bdev_zone.o 00:02:10.412 CC lib/bdev/part.o 00:02:10.412 CC lib/bdev/scsi_nvme.o 00:02:10.670 SYMLINK libspdk_nvme.so 00:02:12.044 LIB libspdk_blob.a 00:02:12.303 SO libspdk_blob.so.11.0 00:02:12.303 SYMLINK libspdk_blob.so 00:02:12.561 CC lib/lvol/lvol.o 00:02:12.561 CC lib/blobfs/blobfs.o 00:02:12.561 CC lib/blobfs/tree.o 00:02:13.128 LIB libspdk_bdev.a 00:02:13.128 SO libspdk_bdev.so.15.0 00:02:13.128 SYMLINK libspdk_bdev.so 00:02:13.395 LIB libspdk_blobfs.a 00:02:13.395 SO libspdk_blobfs.so.10.0 00:02:13.395 CC lib/ublk/ublk.o 00:02:13.395 CC lib/nbd/nbd.o 00:02:13.395 CC lib/scsi/dev.o 00:02:13.395 CC lib/nvmf/ctrlr.o 00:02:13.395 CC lib/ublk/ublk_rpc.o 00:02:13.395 CC lib/scsi/lun.o 00:02:13.395 CC lib/nbd/nbd_rpc.o 00:02:13.395 CC lib/nvmf/ctrlr_discovery.o 00:02:13.395 CC lib/ftl/ftl_core.o 00:02:13.395 CC lib/scsi/port.o 00:02:13.395 CC lib/nvmf/ctrlr_bdev.o 00:02:13.395 CC lib/ftl/ftl_init.o 00:02:13.395 CC lib/scsi/scsi.o 00:02:13.395 CC lib/nvmf/subsystem.o 00:02:13.395 CC lib/ftl/ftl_layout.o 00:02:13.395 CC lib/nvmf/nvmf.o 00:02:13.395 CC lib/scsi/scsi_bdev.o 00:02:13.395 CC lib/scsi/scsi_pr.o 00:02:13.395 CC lib/ftl/ftl_debug.o 00:02:13.395 CC lib/nvmf/nvmf_rpc.o 00:02:13.395 CC lib/ftl/ftl_io.o 00:02:13.395 CC lib/ftl/ftl_sb.o 00:02:13.395 CC lib/scsi/scsi_rpc.o 00:02:13.395 CC lib/nvmf/transport.o 00:02:13.395 CC lib/scsi/task.o 00:02:13.395 CC lib/nvmf/tcp.o 00:02:13.395 CC lib/nvmf/stubs.o 00:02:13.395 CC lib/ftl/ftl_l2p.o 00:02:13.395 CC lib/nvmf/mdns_server.o 00:02:13.395 CC lib/ftl/ftl_l2p_flat.o 00:02:13.395 CC lib/nvmf/vfio_user.o 00:02:13.395 CC lib/ftl/ftl_nv_cache.o 00:02:13.395 CC lib/nvmf/auth.o 00:02:13.395 CC lib/nvmf/rdma.o 00:02:13.395 CC lib/ftl/ftl_band.o 00:02:13.395 CC lib/ftl/ftl_band_ops.o 00:02:13.395 CC lib/ftl/ftl_writer.o 00:02:13.395 CC lib/ftl/ftl_rq.o 00:02:13.395 CC lib/ftl/ftl_reloc.o 00:02:13.395 CC lib/ftl/ftl_l2p_cache.o 00:02:13.395 CC lib/ftl/ftl_p2l.o 00:02:13.395 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.395 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.395 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.395 SYMLINK libspdk_blobfs.so 00:02:13.395 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.395 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.395 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.395 LIB libspdk_lvol.a 00:02:13.395 SO libspdk_lvol.so.10.0 00:02:13.660 SYMLINK libspdk_lvol.so 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.660 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.660 CC lib/ftl/utils/ftl_conf.o 00:02:13.660 CC lib/ftl/utils/ftl_md.o 00:02:13.924 CC lib/ftl/utils/ftl_mempool.o 00:02:13.924 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.924 CC lib/ftl/utils/ftl_property.o 00:02:13.924 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.924 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.924 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.924 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.924 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.924 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:02:13.924 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.924 CC lib/ftl/base/ftl_base_dev.o 00:02:13.924 CC lib/ftl/ftl_trace.o 00:02:14.183 LIB libspdk_nbd.a 00:02:14.183 SO libspdk_nbd.so.7.0 00:02:14.183 SYMLINK libspdk_nbd.so 00:02:14.183 LIB libspdk_scsi.a 00:02:14.440 SO libspdk_scsi.so.9.0 00:02:14.440 SYMLINK libspdk_scsi.so 00:02:14.440 LIB libspdk_ublk.a 00:02:14.440 SO libspdk_ublk.so.3.0 00:02:14.440 SYMLINK libspdk_ublk.so 00:02:14.700 CC lib/vhost/vhost.o 00:02:14.700 CC lib/iscsi/conn.o 00:02:14.700 CC lib/vhost/vhost_rpc.o 00:02:14.700 CC lib/iscsi/init_grp.o 00:02:14.700 CC lib/vhost/vhost_scsi.o 00:02:14.700 CC lib/iscsi/iscsi.o 00:02:14.700 CC lib/vhost/vhost_blk.o 00:02:14.700 CC lib/iscsi/md5.o 00:02:14.700 CC lib/vhost/rte_vhost_user.o 00:02:14.700 CC lib/iscsi/param.o 00:02:14.700 CC lib/iscsi/portal_grp.o 00:02:14.700 CC lib/iscsi/tgt_node.o 00:02:14.700 CC lib/iscsi/iscsi_subsystem.o 00:02:14.700 CC lib/iscsi/iscsi_rpc.o 00:02:14.700 CC lib/iscsi/task.o 00:02:14.700 LIB libspdk_ftl.a 00:02:15.004 SO libspdk_ftl.so.9.0 00:02:15.261 SYMLINK libspdk_ftl.so 00:02:15.826 LIB libspdk_vhost.a 00:02:15.826 SO libspdk_vhost.so.8.0 00:02:15.826 LIB libspdk_nvmf.a 00:02:16.083 SYMLINK libspdk_vhost.so 00:02:16.083 SO libspdk_nvmf.so.18.1 00:02:16.083 LIB libspdk_iscsi.a 00:02:16.083 SO libspdk_iscsi.so.8.0 00:02:16.083 SYMLINK libspdk_nvmf.so 00:02:16.083 SYMLINK libspdk_iscsi.so 00:02:16.341 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.341 CC module/vfu_device/vfu_virtio.o 00:02:16.341 CC module/vfu_device/vfu_virtio_blk.o 00:02:16.341 CC module/vfu_device/vfu_virtio_scsi.o 00:02:16.341 CC module/vfu_device/vfu_virtio_rpc.o 00:02:16.599 CC module/blob/bdev/blob_bdev.o 00:02:16.599 CC module/accel/iaa/accel_iaa.o 00:02:16.599 CC module/keyring/file/keyring.o 00:02:16.599 CC module/accel/dsa/accel_dsa.o 00:02:16.599 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.599 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.599 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.599 CC module/keyring/file/keyring_rpc.o 00:02:16.599 CC module/accel/ioat/accel_ioat.o 00:02:16.599 CC module/sock/posix/posix.o 00:02:16.599 CC module/accel/error/accel_error.o 00:02:16.599 CC module/keyring/linux/keyring.o 00:02:16.599 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.599 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.599 CC module/keyring/linux/keyring_rpc.o 00:02:16.599 CC module/accel/error/accel_error_rpc.o 00:02:16.599 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.599 LIB libspdk_env_dpdk_rpc.a 00:02:16.599 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.599 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.599 LIB libspdk_keyring_linux.a 00:02:16.599 LIB libspdk_keyring_file.a 00:02:16.857 LIB libspdk_scheduler_gscheduler.a 00:02:16.857 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.857 SO libspdk_keyring_linux.so.1.0 00:02:16.857 SO libspdk_keyring_file.so.1.0 00:02:16.857 LIB libspdk_accel_error.a 00:02:16.857 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.857 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.857 LIB libspdk_scheduler_dynamic.a 00:02:16.857 LIB libspdk_accel_ioat.a 00:02:16.857 SO libspdk_accel_error.so.2.0 00:02:16.857 LIB libspdk_accel_iaa.a 00:02:16.857 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.857 SO libspdk_accel_ioat.so.6.0 00:02:16.857 SYMLINK libspdk_keyring_file.so 00:02:16.857 SYMLINK libspdk_keyring_linux.so 00:02:16.857 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.857 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.857 SO libspdk_accel_iaa.so.3.0 
00:02:16.857 SYMLINK libspdk_accel_error.so 00:02:16.857 LIB libspdk_blob_bdev.a 00:02:16.857 LIB libspdk_accel_dsa.a 00:02:16.857 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.857 SYMLINK libspdk_accel_ioat.so 00:02:16.857 SO libspdk_blob_bdev.so.11.0 00:02:16.857 SO libspdk_accel_dsa.so.5.0 00:02:16.857 SYMLINK libspdk_accel_iaa.so 00:02:16.857 SYMLINK libspdk_blob_bdev.so 00:02:16.857 SYMLINK libspdk_accel_dsa.so 00:02:17.116 LIB libspdk_vfu_device.a 00:02:17.116 SO libspdk_vfu_device.so.3.0 00:02:17.116 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.116 CC module/bdev/gpt/gpt.o 00:02:17.116 CC module/bdev/delay/vbdev_delay.o 00:02:17.116 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.116 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.116 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.116 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.116 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.116 CC module/bdev/malloc/bdev_malloc.o 00:02:17.116 CC module/bdev/error/vbdev_error.o 00:02:17.116 CC module/bdev/aio/bdev_aio.o 00:02:17.116 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.116 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.116 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.116 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.116 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.116 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.116 CC module/bdev/ftl/bdev_ftl.o 00:02:17.116 CC module/bdev/nvme/bdev_nvme.o 00:02:17.116 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.116 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.116 CC module/bdev/null/bdev_null.o 00:02:17.116 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.116 CC module/bdev/null/bdev_null_rpc.o 00:02:17.116 CC module/bdev/split/vbdev_split.o 00:02:17.116 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.116 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.116 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.116 CC module/bdev/nvme/nvme_rpc.o 00:02:17.116 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.116 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.116 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.116 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.116 CC module/bdev/nvme/vbdev_opal.o 00:02:17.116 CC module/bdev/raid/bdev_raid.o 00:02:17.116 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.116 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.116 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.116 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.116 CC module/bdev/raid/raid0.o 00:02:17.116 CC module/bdev/raid/raid1.o 00:02:17.116 CC module/bdev/raid/concat.o 00:02:17.376 SYMLINK libspdk_vfu_device.so 00:02:17.376 LIB libspdk_sock_posix.a 00:02:17.376 SO libspdk_sock_posix.so.6.0 00:02:17.635 LIB libspdk_blobfs_bdev.a 00:02:17.635 SYMLINK libspdk_sock_posix.so 00:02:17.635 SO libspdk_blobfs_bdev.so.6.0 00:02:17.635 LIB libspdk_bdev_error.a 00:02:17.635 LIB libspdk_bdev_split.a 00:02:17.635 SYMLINK libspdk_blobfs_bdev.so 00:02:17.635 SO libspdk_bdev_error.so.6.0 00:02:17.635 SO libspdk_bdev_split.so.6.0 00:02:17.635 LIB libspdk_bdev_null.a 00:02:17.635 LIB libspdk_bdev_gpt.a 00:02:17.635 SO libspdk_bdev_null.so.6.0 00:02:17.635 SYMLINK libspdk_bdev_error.so 00:02:17.635 SYMLINK libspdk_bdev_split.so 00:02:17.635 LIB libspdk_bdev_ftl.a 00:02:17.635 LIB libspdk_bdev_passthru.a 00:02:17.635 SO libspdk_bdev_gpt.so.6.0 00:02:17.635 LIB libspdk_bdev_iscsi.a 00:02:17.635 LIB libspdk_bdev_delay.a 00:02:17.635 SO libspdk_bdev_ftl.so.6.0 00:02:17.635 SO libspdk_bdev_passthru.so.6.0 00:02:17.893 SYMLINK libspdk_bdev_null.so 00:02:17.893 LIB 
libspdk_bdev_aio.a 00:02:17.893 LIB libspdk_bdev_zone_block.a 00:02:17.893 SO libspdk_bdev_iscsi.so.6.0 00:02:17.893 SO libspdk_bdev_delay.so.6.0 00:02:17.893 LIB libspdk_bdev_malloc.a 00:02:17.893 SYMLINK libspdk_bdev_gpt.so 00:02:17.893 SO libspdk_bdev_aio.so.6.0 00:02:17.893 SO libspdk_bdev_zone_block.so.6.0 00:02:17.893 SYMLINK libspdk_bdev_ftl.so 00:02:17.893 SO libspdk_bdev_malloc.so.6.0 00:02:17.893 SYMLINK libspdk_bdev_passthru.so 00:02:17.893 SYMLINK libspdk_bdev_iscsi.so 00:02:17.893 SYMLINK libspdk_bdev_delay.so 00:02:17.893 SYMLINK libspdk_bdev_aio.so 00:02:17.893 SYMLINK libspdk_bdev_zone_block.so 00:02:17.893 SYMLINK libspdk_bdev_malloc.so 00:02:17.893 LIB libspdk_bdev_lvol.a 00:02:17.893 SO libspdk_bdev_lvol.so.6.0 00:02:17.893 LIB libspdk_bdev_virtio.a 00:02:17.893 SO libspdk_bdev_virtio.so.6.0 00:02:17.893 SYMLINK libspdk_bdev_lvol.so 00:02:18.151 SYMLINK libspdk_bdev_virtio.so 00:02:18.411 LIB libspdk_bdev_raid.a 00:02:18.411 SO libspdk_bdev_raid.so.6.0 00:02:18.411 SYMLINK libspdk_bdev_raid.so 00:02:19.786 LIB libspdk_bdev_nvme.a 00:02:19.786 SO libspdk_bdev_nvme.so.7.0 00:02:19.786 SYMLINK libspdk_bdev_nvme.so 00:02:20.044 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.044 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.044 CC module/event/subsystems/sock/sock.o 00:02:20.044 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.044 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.044 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.044 CC module/event/subsystems/vmd/vmd.o 00:02:20.044 CC module/event/subsystems/keyring/keyring.o 00:02:20.044 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.044 LIB libspdk_event_keyring.a 00:02:20.044 LIB libspdk_event_vhost_blk.a 00:02:20.044 LIB libspdk_event_sock.a 00:02:20.044 LIB libspdk_event_scheduler.a 00:02:20.044 LIB libspdk_event_vmd.a 00:02:20.044 LIB libspdk_event_vfu_tgt.a 00:02:20.044 LIB libspdk_event_iobuf.a 00:02:20.303 SO libspdk_event_keyring.so.1.0 00:02:20.303 SO libspdk_event_vhost_blk.so.3.0 00:02:20.303 SO libspdk_event_sock.so.5.0 00:02:20.304 SO libspdk_event_scheduler.so.4.0 00:02:20.304 SO libspdk_event_vmd.so.6.0 00:02:20.304 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.304 SO libspdk_event_iobuf.so.3.0 00:02:20.304 SYMLINK libspdk_event_keyring.so 00:02:20.304 SYMLINK libspdk_event_vhost_blk.so 00:02:20.304 SYMLINK libspdk_event_sock.so 00:02:20.304 SYMLINK libspdk_event_scheduler.so 00:02:20.304 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.304 SYMLINK libspdk_event_vmd.so 00:02:20.304 SYMLINK libspdk_event_iobuf.so 00:02:20.563 CC module/event/subsystems/accel/accel.o 00:02:20.563 LIB libspdk_event_accel.a 00:02:20.563 SO libspdk_event_accel.so.6.0 00:02:20.563 SYMLINK libspdk_event_accel.so 00:02:20.824 CC module/event/subsystems/bdev/bdev.o 00:02:21.083 LIB libspdk_event_bdev.a 00:02:21.083 SO libspdk_event_bdev.so.6.0 00:02:21.083 SYMLINK libspdk_event_bdev.so 00:02:21.340 CC module/event/subsystems/scsi/scsi.o 00:02:21.340 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.340 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.340 CC module/event/subsystems/ublk/ublk.o 00:02:21.340 CC module/event/subsystems/nbd/nbd.o 00:02:21.340 LIB libspdk_event_ublk.a 00:02:21.340 LIB libspdk_event_nbd.a 00:02:21.340 LIB libspdk_event_scsi.a 00:02:21.340 SO libspdk_event_nbd.so.6.0 00:02:21.340 SO libspdk_event_ublk.so.3.0 00:02:21.340 SO libspdk_event_scsi.so.6.0 00:02:21.340 SYMLINK libspdk_event_ublk.so 00:02:21.340 SYMLINK libspdk_event_nbd.so 00:02:21.340 SYMLINK libspdk_event_scsi.so 
00:02:21.599 LIB libspdk_event_nvmf.a 00:02:21.599 SO libspdk_event_nvmf.so.6.0 00:02:21.599 SYMLINK libspdk_event_nvmf.so 00:02:21.599 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.599 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.856 LIB libspdk_event_vhost_scsi.a 00:02:21.856 LIB libspdk_event_iscsi.a 00:02:21.856 SO libspdk_event_vhost_scsi.so.3.0 00:02:21.856 SO libspdk_event_iscsi.so.6.0 00:02:21.856 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.856 SYMLINK libspdk_event_iscsi.so 00:02:22.117 SO libspdk.so.6.0 00:02:22.117 SYMLINK libspdk.so 00:02:22.117 CC app/trace_record/trace_record.o 00:02:22.117 CC app/spdk_nvme_discover/discovery_aer.o 00:02:22.117 CXX app/trace/trace.o 00:02:22.117 CC app/spdk_lspci/spdk_lspci.o 00:02:22.117 CC app/spdk_top/spdk_top.o 00:02:22.117 CC app/spdk_nvme_identify/identify.o 00:02:22.117 TEST_HEADER include/spdk/accel.h 00:02:22.117 TEST_HEADER include/spdk/accel_module.h 00:02:22.117 CC test/rpc_client/rpc_client_test.o 00:02:22.117 TEST_HEADER include/spdk/assert.h 00:02:22.117 TEST_HEADER include/spdk/barrier.h 00:02:22.117 TEST_HEADER include/spdk/base64.h 00:02:22.117 TEST_HEADER include/spdk/bdev.h 00:02:22.117 TEST_HEADER include/spdk/bdev_module.h 00:02:22.117 CC app/spdk_nvme_perf/perf.o 00:02:22.117 TEST_HEADER include/spdk/bdev_zone.h 00:02:22.380 TEST_HEADER include/spdk/bit_array.h 00:02:22.380 TEST_HEADER include/spdk/bit_pool.h 00:02:22.380 TEST_HEADER include/spdk/blob_bdev.h 00:02:22.380 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:22.380 TEST_HEADER include/spdk/blobfs.h 00:02:22.380 TEST_HEADER include/spdk/blob.h 00:02:22.380 TEST_HEADER include/spdk/conf.h 00:02:22.380 TEST_HEADER include/spdk/config.h 00:02:22.380 TEST_HEADER include/spdk/cpuset.h 00:02:22.380 TEST_HEADER include/spdk/crc16.h 00:02:22.380 TEST_HEADER include/spdk/crc32.h 00:02:22.381 TEST_HEADER include/spdk/crc64.h 00:02:22.381 CC app/spdk_dd/spdk_dd.o 00:02:22.381 TEST_HEADER include/spdk/dif.h 00:02:22.381 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:22.381 TEST_HEADER include/spdk/dma.h 00:02:22.381 TEST_HEADER include/spdk/endian.h 00:02:22.381 CC app/iscsi_tgt/iscsi_tgt.o 00:02:22.381 TEST_HEADER include/spdk/env_dpdk.h 00:02:22.381 TEST_HEADER include/spdk/env.h 00:02:22.381 CC app/nvmf_tgt/nvmf_main.o 00:02:22.381 TEST_HEADER include/spdk/event.h 00:02:22.381 TEST_HEADER include/spdk/fd_group.h 00:02:22.381 TEST_HEADER include/spdk/fd.h 00:02:22.381 TEST_HEADER include/spdk/file.h 00:02:22.381 CC app/vhost/vhost.o 00:02:22.381 TEST_HEADER include/spdk/ftl.h 00:02:22.381 TEST_HEADER include/spdk/gpt_spec.h 00:02:22.381 TEST_HEADER include/spdk/hexlify.h 00:02:22.381 TEST_HEADER include/spdk/histogram_data.h 00:02:22.381 TEST_HEADER include/spdk/idxd.h 00:02:22.381 TEST_HEADER include/spdk/idxd_spec.h 00:02:22.381 TEST_HEADER include/spdk/init.h 00:02:22.381 TEST_HEADER include/spdk/ioat.h 00:02:22.381 TEST_HEADER include/spdk/ioat_spec.h 00:02:22.381 TEST_HEADER include/spdk/iscsi_spec.h 00:02:22.381 CC app/spdk_tgt/spdk_tgt.o 00:02:22.381 TEST_HEADER include/spdk/json.h 00:02:22.381 CC examples/nvme/hello_world/hello_world.o 00:02:22.381 CC app/fio/nvme/fio_plugin.o 00:02:22.381 CC examples/nvme/arbitration/arbitration.o 00:02:22.381 TEST_HEADER include/spdk/jsonrpc.h 00:02:22.381 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.381 CC examples/nvme/abort/abort.o 00:02:22.381 CC examples/nvme/reconnect/reconnect.o 00:02:22.381 TEST_HEADER include/spdk/keyring.h 00:02:22.381 CC examples/idxd/perf/perf.o 00:02:22.381 CC 
examples/nvme/hotplug/hotplug.o 00:02:22.381 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.381 CC examples/accel/perf/accel_perf.o 00:02:22.381 CC examples/util/zipf/zipf.o 00:02:22.381 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.381 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.381 TEST_HEADER include/spdk/keyring_module.h 00:02:22.381 CC test/thread/poller_perf/poller_perf.o 00:02:22.381 TEST_HEADER include/spdk/likely.h 00:02:22.381 CC examples/sock/hello_world/hello_sock.o 00:02:22.381 TEST_HEADER include/spdk/log.h 00:02:22.381 TEST_HEADER include/spdk/lvol.h 00:02:22.381 CC test/nvme/aer/aer.o 00:02:22.381 CC examples/ioat/perf/perf.o 00:02:22.381 TEST_HEADER include/spdk/memory.h 00:02:22.381 CC test/event/event_perf/event_perf.o 00:02:22.381 TEST_HEADER include/spdk/mmio.h 00:02:22.381 TEST_HEADER include/spdk/nbd.h 00:02:22.381 TEST_HEADER include/spdk/notify.h 00:02:22.381 TEST_HEADER include/spdk/nvme.h 00:02:22.381 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.381 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.381 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.381 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.381 CC examples/blob/cli/blobcli.o 00:02:22.381 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.381 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.381 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.381 CC examples/blob/hello_world/hello_blob.o 00:02:22.381 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.381 CC test/bdev/bdevio/bdevio.o 00:02:22.381 CC test/dma/test_dma/test_dma.o 00:02:22.381 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.381 CC examples/nvmf/nvmf/nvmf.o 00:02:22.381 CC examples/thread/thread/thread_ex.o 00:02:22.381 CC test/app/bdev_svc/bdev_svc.o 00:02:22.381 TEST_HEADER include/spdk/nvmf.h 00:02:22.381 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.381 CC test/blobfs/mkfs/mkfs.o 00:02:22.381 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.381 TEST_HEADER include/spdk/opal.h 00:02:22.381 CC test/accel/dif/dif.o 00:02:22.381 TEST_HEADER include/spdk/opal_spec.h 00:02:22.381 TEST_HEADER include/spdk/pci_ids.h 00:02:22.381 TEST_HEADER include/spdk/pipe.h 00:02:22.381 TEST_HEADER include/spdk/queue.h 00:02:22.381 CC app/fio/bdev/fio_plugin.o 00:02:22.381 TEST_HEADER include/spdk/reduce.h 00:02:22.381 TEST_HEADER include/spdk/rpc.h 00:02:22.381 TEST_HEADER include/spdk/scheduler.h 00:02:22.381 TEST_HEADER include/spdk/scsi.h 00:02:22.381 TEST_HEADER include/spdk/scsi_spec.h 00:02:22.381 TEST_HEADER include/spdk/sock.h 00:02:22.642 TEST_HEADER include/spdk/stdinc.h 00:02:22.642 TEST_HEADER include/spdk/string.h 00:02:22.642 LINK spdk_lspci 00:02:22.642 TEST_HEADER include/spdk/thread.h 00:02:22.642 TEST_HEADER include/spdk/trace.h 00:02:22.642 CC test/env/mem_callbacks/mem_callbacks.o 00:02:22.642 TEST_HEADER include/spdk/trace_parser.h 00:02:22.642 TEST_HEADER include/spdk/tree.h 00:02:22.642 TEST_HEADER include/spdk/ublk.h 00:02:22.642 TEST_HEADER include/spdk/util.h 00:02:22.642 TEST_HEADER include/spdk/uuid.h 00:02:22.642 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.642 TEST_HEADER include/spdk/version.h 00:02:22.642 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:22.642 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:22.642 TEST_HEADER include/spdk/vhost.h 00:02:22.642 TEST_HEADER include/spdk/vmd.h 00:02:22.642 TEST_HEADER include/spdk/xor.h 00:02:22.642 CC test/lvol/esnap/esnap.o 00:02:22.642 TEST_HEADER include/spdk/zipf.h 00:02:22.642 CXX test/cpp_headers/accel.o 00:02:22.642 LINK rpc_client_test 00:02:22.642 LINK spdk_nvme_discover 
00:02:22.642 LINK lsvmd 00:02:22.642 LINK interrupt_tgt 00:02:22.642 LINK nvmf_tgt 00:02:22.642 LINK zipf 00:02:22.642 LINK poller_perf 00:02:22.642 LINK event_perf 00:02:22.642 LINK vhost 00:02:22.642 LINK spdk_trace_record 00:02:22.642 LINK iscsi_tgt 00:02:22.642 LINK pmr_persistence 00:02:22.642 LINK cmb_copy 00:02:22.905 LINK spdk_tgt 00:02:22.905 LINK ioat_perf 00:02:22.905 LINK bdev_svc 00:02:22.905 LINK hello_world 00:02:22.905 LINK hotplug 00:02:22.905 LINK hello_sock 00:02:22.905 LINK mkfs 00:02:22.905 LINK hello_bdev 00:02:22.905 LINK hello_blob 00:02:22.905 CXX test/cpp_headers/accel_module.o 00:02:22.905 LINK thread 00:02:22.905 LINK aer 00:02:22.905 LINK spdk_dd 00:02:22.905 LINK arbitration 00:02:22.905 CC test/event/reactor/reactor.o 00:02:22.905 LINK idxd_perf 00:02:23.169 LINK nvmf 00:02:23.169 LINK reconnect 00:02:23.169 CC test/app/histogram_perf/histogram_perf.o 00:02:23.169 LINK abort 00:02:23.169 LINK spdk_trace 00:02:23.169 CXX test/cpp_headers/assert.o 00:02:23.169 CXX test/cpp_headers/barrier.o 00:02:23.169 CC test/env/vtophys/vtophys.o 00:02:23.169 CC examples/vmd/led/led.o 00:02:23.169 CC test/nvme/reset/reset.o 00:02:23.169 CXX test/cpp_headers/base64.o 00:02:23.169 LINK bdevio 00:02:23.169 CC test/app/jsoncat/jsoncat.o 00:02:23.169 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:23.169 CC test/event/reactor_perf/reactor_perf.o 00:02:23.169 LINK test_dma 00:02:23.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.169 CXX test/cpp_headers/bdev.o 00:02:23.169 CC examples/ioat/verify/verify.o 00:02:23.169 CC test/app/stub/stub.o 00:02:23.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:23.439 LINK accel_perf 00:02:23.439 LINK dif 00:02:23.439 LINK reactor 00:02:23.439 LINK nvme_manage 00:02:23.439 LINK histogram_perf 00:02:23.439 CXX test/cpp_headers/bdev_module.o 00:02:23.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:23.439 LINK nvme_fuzz 00:02:23.439 CC test/nvme/sgl/sgl.o 00:02:23.439 CC test/env/memory/memory_ut.o 00:02:23.439 LINK blobcli 00:02:23.439 CC test/event/app_repeat/app_repeat.o 00:02:23.439 CC test/env/pci/pci_ut.o 00:02:23.439 LINK spdk_bdev 00:02:23.439 LINK spdk_nvme 00:02:23.439 CXX test/cpp_headers/bdev_zone.o 00:02:23.439 CXX test/cpp_headers/bit_array.o 00:02:23.439 LINK vtophys 00:02:23.439 LINK jsoncat 00:02:23.439 LINK led 00:02:23.439 CXX test/cpp_headers/bit_pool.o 00:02:23.439 CC test/nvme/overhead/overhead.o 00:02:23.439 CC test/nvme/e2edp/nvme_dp.o 00:02:23.439 LINK reactor_perf 00:02:23.439 CC test/nvme/err_injection/err_injection.o 00:02:23.439 CC test/nvme/startup/startup.o 00:02:23.439 CC test/event/scheduler/scheduler.o 00:02:23.439 CC test/nvme/simple_copy/simple_copy.o 00:02:23.439 CC test/nvme/reserve/reserve.o 00:02:23.716 CC test/nvme/connect_stress/connect_stress.o 00:02:23.716 LINK env_dpdk_post_init 00:02:23.716 CXX test/cpp_headers/blob_bdev.o 00:02:23.716 CC test/nvme/compliance/nvme_compliance.o 00:02:23.716 CC test/nvme/boot_partition/boot_partition.o 00:02:23.716 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.716 CXX test/cpp_headers/blobfs.o 00:02:23.716 CXX test/cpp_headers/blob.o 00:02:23.716 LINK stub 00:02:23.716 CXX test/cpp_headers/conf.o 00:02:23.716 CXX test/cpp_headers/config.o 00:02:23.716 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.716 CXX test/cpp_headers/cpuset.o 00:02:23.716 CXX test/cpp_headers/crc16.o 00:02:23.716 LINK verify 00:02:23.716 CXX test/cpp_headers/crc32.o 00:02:23.716 LINK app_repeat 00:02:23.716 CXX test/cpp_headers/crc64.o 00:02:23.716 LINK reset 
00:02:23.716 LINK mem_callbacks 00:02:23.716 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.716 CXX test/cpp_headers/dif.o 00:02:23.716 CXX test/cpp_headers/dma.o 00:02:23.716 LINK spdk_nvme_perf 00:02:23.716 CC test/nvme/fdp/fdp.o 00:02:23.716 CXX test/cpp_headers/endian.o 00:02:23.716 CXX test/cpp_headers/env_dpdk.o 00:02:23.716 CXX test/cpp_headers/env.o 00:02:23.716 CXX test/cpp_headers/event.o 00:02:23.716 CC test/nvme/cuse/cuse.o 00:02:23.716 CXX test/cpp_headers/fd_group.o 00:02:23.716 CXX test/cpp_headers/fd.o 00:02:23.979 LINK spdk_nvme_identify 00:02:23.979 CXX test/cpp_headers/file.o 00:02:23.979 LINK sgl 00:02:23.979 LINK startup 00:02:23.979 LINK err_injection 00:02:23.979 LINK bdevperf 00:02:23.979 CXX test/cpp_headers/ftl.o 00:02:23.979 LINK connect_stress 00:02:23.979 CXX test/cpp_headers/gpt_spec.o 00:02:23.979 LINK reserve 00:02:23.979 LINK boot_partition 00:02:23.979 CXX test/cpp_headers/hexlify.o 00:02:23.979 LINK spdk_top 00:02:23.979 LINK simple_copy 00:02:23.979 CXX test/cpp_headers/histogram_data.o 00:02:23.979 LINK scheduler 00:02:23.979 CXX test/cpp_headers/idxd.o 00:02:23.979 LINK nvme_dp 00:02:23.979 CXX test/cpp_headers/idxd_spec.o 00:02:23.979 LINK overhead 00:02:23.979 CXX test/cpp_headers/init.o 00:02:23.979 CXX test/cpp_headers/ioat.o 00:02:23.979 CXX test/cpp_headers/ioat_spec.o 00:02:23.979 CXX test/cpp_headers/iscsi_spec.o 00:02:23.979 CXX test/cpp_headers/json.o 00:02:23.979 CXX test/cpp_headers/jsonrpc.o 00:02:23.979 CXX test/cpp_headers/keyring.o 00:02:23.979 CXX test/cpp_headers/keyring_module.o 00:02:24.240 CXX test/cpp_headers/likely.o 00:02:24.240 LINK fused_ordering 00:02:24.240 CXX test/cpp_headers/log.o 00:02:24.240 LINK vhost_fuzz 00:02:24.240 LINK pci_ut 00:02:24.240 CXX test/cpp_headers/lvol.o 00:02:24.240 CXX test/cpp_headers/memory.o 00:02:24.240 LINK doorbell_aers 00:02:24.240 CXX test/cpp_headers/mmio.o 00:02:24.240 CXX test/cpp_headers/nbd.o 00:02:24.240 CXX test/cpp_headers/notify.o 00:02:24.240 CXX test/cpp_headers/nvme.o 00:02:24.240 CXX test/cpp_headers/nvme_intel.o 00:02:24.240 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.240 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.240 CXX test/cpp_headers/nvme_spec.o 00:02:24.240 CXX test/cpp_headers/nvme_zns.o 00:02:24.240 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.240 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.240 CXX test/cpp_headers/nvmf.o 00:02:24.240 LINK nvme_compliance 00:02:24.240 CXX test/cpp_headers/nvmf_spec.o 00:02:24.240 CXX test/cpp_headers/nvmf_transport.o 00:02:24.240 CXX test/cpp_headers/opal.o 00:02:24.240 CXX test/cpp_headers/opal_spec.o 00:02:24.240 CXX test/cpp_headers/pci_ids.o 00:02:24.240 CXX test/cpp_headers/pipe.o 00:02:24.240 CXX test/cpp_headers/reduce.o 00:02:24.240 CXX test/cpp_headers/queue.o 00:02:24.501 CXX test/cpp_headers/rpc.o 00:02:24.501 CXX test/cpp_headers/scheduler.o 00:02:24.501 CXX test/cpp_headers/scsi.o 00:02:24.501 CXX test/cpp_headers/scsi_spec.o 00:02:24.501 CXX test/cpp_headers/sock.o 00:02:24.501 CXX test/cpp_headers/stdinc.o 00:02:24.501 CXX test/cpp_headers/string.o 00:02:24.501 CXX test/cpp_headers/thread.o 00:02:24.501 CXX test/cpp_headers/trace.o 00:02:24.501 CXX test/cpp_headers/trace_parser.o 00:02:24.501 CXX test/cpp_headers/tree.o 00:02:24.501 CXX test/cpp_headers/ublk.o 00:02:24.501 CXX test/cpp_headers/util.o 00:02:24.501 CXX test/cpp_headers/uuid.o 00:02:24.501 CXX test/cpp_headers/version.o 00:02:24.501 LINK fdp 00:02:24.501 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.501 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.501 
CXX test/cpp_headers/vhost.o 00:02:24.501 CXX test/cpp_headers/vmd.o 00:02:24.501 CXX test/cpp_headers/xor.o 00:02:24.501 CXX test/cpp_headers/zipf.o 00:02:25.435 LINK memory_ut 00:02:25.692 LINK cuse 00:02:25.692 LINK iscsi_fuzz 00:02:28.227 LINK esnap 00:02:28.793 00:02:28.793 real 0m47.991s 00:02:28.793 user 10m14.273s 00:02:28.793 sys 2m30.156s 00:02:28.793 11:39:18 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:28.793 11:39:18 make -- common/autotest_common.sh@10 -- $ set +x 00:02:28.793 ************************************ 00:02:28.793 END TEST make 00:02:28.793 ************************************ 00:02:28.793 11:39:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:28.793 11:39:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:28.793 11:39:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:28.793 11:39:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.793 11:39:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:28.793 11:39:18 -- pm/common@44 -- $ pid=715204 00:02:28.793 11:39:18 -- pm/common@50 -- $ kill -TERM 715204 00:02:28.793 11:39:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.793 11:39:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:28.793 11:39:18 -- pm/common@44 -- $ pid=715206 00:02:28.793 11:39:18 -- pm/common@50 -- $ kill -TERM 715206 00:02:28.793 11:39:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.793 11:39:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:28.793 11:39:18 -- pm/common@44 -- $ pid=715207 00:02:28.793 11:39:18 -- pm/common@50 -- $ kill -TERM 715207 00:02:28.793 11:39:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.793 11:39:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:28.793 11:39:18 -- pm/common@44 -- $ pid=715234 00:02:28.793 11:39:18 -- pm/common@50 -- $ sudo -E kill -TERM 715234 00:02:28.793 11:39:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:28.793 11:39:18 -- nvmf/common.sh@7 -- # uname -s 00:02:28.793 11:39:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.793 11:39:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.793 11:39:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.793 11:39:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.793 11:39:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.793 11:39:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.793 11:39:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.793 11:39:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.793 11:39:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.793 11:39:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.793 11:39:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:28.793 11:39:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:28.793 11:39:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.793 11:39:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.793 11:39:18 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:28.793 11:39:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:28.793 11:39:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:28.793 11:39:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.793 11:39:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.793 11:39:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.793 11:39:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.793 11:39:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.793 11:39:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.793 11:39:18 -- paths/export.sh@5 -- # export PATH 00:02:28.793 11:39:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.793 11:39:18 -- nvmf/common.sh@47 -- # : 0 00:02:28.793 11:39:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:28.794 11:39:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:28.794 11:39:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:28.794 11:39:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.794 11:39:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.794 11:39:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:28.794 11:39:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:28.794 11:39:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:28.794 11:39:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.794 11:39:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.794 11:39:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.794 11:39:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.794 11:39:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:28.794 11:39:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.794 11:39:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:28.794 11:39:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.794 11:39:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.794 11:39:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.794 11:39:18 -- spdk/autotest.sh@48 -- # udevadm_pid=771111 00:02:28.794 11:39:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:28.794 
11:39:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:28.794 11:39:18 -- pm/common@17 -- # local monitor 00:02:28.794 11:39:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.794 11:39:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.794 11:39:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.794 11:39:18 -- pm/common@21 -- # date +%s 00:02:28.794 11:39:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.794 11:39:18 -- pm/common@21 -- # date +%s 00:02:28.794 11:39:18 -- pm/common@25 -- # sleep 1 00:02:28.794 11:39:18 -- pm/common@21 -- # date +%s 00:02:28.794 11:39:18 -- pm/common@21 -- # date +%s 00:02:28.794 11:39:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720777158 00:02:28.794 11:39:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720777158 00:02:28.794 11:39:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720777158 00:02:28.794 11:39:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720777158 00:02:28.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720777158_collect-vmstat.pm.log 00:02:28.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720777158_collect-cpu-load.pm.log 00:02:28.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720777158_collect-cpu-temp.pm.log 00:02:28.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720777158_collect-bmc-pm.bmc.pm.log 00:02:29.731 11:39:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:29.731 11:39:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:29.731 11:39:19 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:29.731 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:02:29.731 11:39:19 -- spdk/autotest.sh@59 -- # create_test_list 00:02:29.731 11:39:19 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:29.731 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:02:29.989 11:39:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:29.989 11:39:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.989 11:39:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.989 11:39:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:29.989 11:39:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.989 11:39:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:29.989 11:39:19 -- common/autotest_common.sh@1454 -- # uname 00:02:29.989 11:39:19 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 
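The trace above shows autotest.sh installing a custom core_pattern collector and then starting the power/resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) in the background, each redirecting to a timestamped .pm.log file under the output/power directory; the END TEST make block earlier stops them again by reading each monitor's .pid file and sending TERM. A minimal, self-contained sketch of that start/stop pattern follows. The paths, the OUTPUT_DIR variable and the vmstat stand-in command are illustrative assumptions, not the actual pm/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the background-monitor start/stop pattern seen in the trace.
# OUTPUT_DIR and the vmstat command are stand-ins, not SPDK's real scripts.
OUTPUT_DIR=/tmp/power
mkdir -p "$OUTPUT_DIR"

start_monitor() {
    local name=$1 stamp
    stamp=$(date +%s)
    # Run the monitor in the background, log to a timestamped file,
    # and remember its PID so it can be stopped later.
    vmstat 1 > "$OUTPUT_DIR/monitor.${name}.${stamp}.pm.log" 2>&1 &
    echo $! > "$OUTPUT_DIR/${name}.pid"
}

stop_monitor() {
    local name=$1 pid
    if [[ -e $OUTPUT_DIR/${name}.pid ]]; then
        pid=$(<"$OUTPUT_DIR/${name}.pid")
        kill -TERM "$pid" 2>/dev/null || true
        rm -f "$OUTPUT_DIR/${name}.pid"
    fi
}

start_monitor collect-vmstat
sleep 5
stop_monitor collect-vmstat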
00:02:29.989 11:39:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:29.989 11:39:19 -- common/autotest_common.sh@1474 -- # uname 00:02:29.989 11:39:19 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:29.989 11:39:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:29.989 11:39:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:29.989 11:39:19 -- spdk/autotest.sh@72 -- # hash lcov 00:02:29.989 11:39:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:29.989 11:39:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:29.989 --rc lcov_branch_coverage=1 00:02:29.989 --rc lcov_function_coverage=1 00:02:29.989 --rc genhtml_branch_coverage=1 00:02:29.989 --rc genhtml_function_coverage=1 00:02:29.989 --rc genhtml_legend=1 00:02:29.989 --rc geninfo_all_blocks=1 00:02:29.989 ' 00:02:29.989 11:39:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:29.989 --rc lcov_branch_coverage=1 00:02:29.989 --rc lcov_function_coverage=1 00:02:29.989 --rc genhtml_branch_coverage=1 00:02:29.989 --rc genhtml_function_coverage=1 00:02:29.989 --rc genhtml_legend=1 00:02:29.989 --rc geninfo_all_blocks=1 00:02:29.989 ' 00:02:29.989 11:39:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:29.989 --rc lcov_branch_coverage=1 00:02:29.989 --rc lcov_function_coverage=1 00:02:29.989 --rc genhtml_branch_coverage=1 00:02:29.989 --rc genhtml_function_coverage=1 00:02:29.989 --rc genhtml_legend=1 00:02:29.989 --rc geninfo_all_blocks=1 00:02:29.989 --no-external' 00:02:29.989 11:39:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:29.989 --rc lcov_branch_coverage=1 00:02:29.989 --rc lcov_function_coverage=1 00:02:29.989 --rc genhtml_branch_coverage=1 00:02:29.989 --rc genhtml_function_coverage=1 00:02:29.989 --rc genhtml_legend=1 00:02:29.989 --rc geninfo_all_blocks=1 00:02:29.989 --no-external' 00:02:29.989 11:39:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:29.989 lcov: LCOV version 1.14 00:02:29.989 11:39:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:35.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:35.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:35.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:35.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:35.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:35.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:35.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:35.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 
00:02:35.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:35.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:35.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:35.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:35.541 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:35.541 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:35.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:35.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:35.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:35.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:35.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:35.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:35.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:35.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:35.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:35.800 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:35.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:35.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:35.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:35.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:35.801 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:35.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:57.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:57.723 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.275 11:39:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:04.275 11:39:53 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:04.275 11:39:53 -- common/autotest_common.sh@10 -- # set +x 00:03:04.275 11:39:53 -- spdk/autotest.sh@91 -- # rm -f 00:03:04.275 11:39:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.841 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:04.841 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:04.841 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:04.841 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:04.841 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:04.841 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:04.841 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:04.841 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:04.841 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:04.841 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:04.841 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:05.099 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:05.099 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:05.099 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:05.099 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:05.099 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:05.099 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:05.099 11:39:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:05.099 11:39:54 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:05.099 11:39:54 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:05.099 11:39:54 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:05.099 11:39:54 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:05.099 11:39:54 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:05.099 11:39:54 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:05.099 11:39:54 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.099 11:39:54 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:05.099 11:39:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:05.099 11:39:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.099 11:39:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:05.099 11:39:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:05.099 11:39:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:05.099 11:39:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.099 No valid GPT data, bailing 00:03:05.099 11:39:54 -- scripts/common.sh@391 -- # blkid 
-s PTTYPE -o value /dev/nvme0n1 00:03:05.099 11:39:54 -- scripts/common.sh@391 -- # pt= 00:03:05.099 11:39:54 -- scripts/common.sh@392 -- # return 1 00:03:05.099 11:39:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.099 1+0 records in 00:03:05.099 1+0 records out 00:03:05.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00221701 s, 473 MB/s 00:03:05.099 11:39:54 -- spdk/autotest.sh@118 -- # sync 00:03:05.099 11:39:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.099 11:39:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.099 11:39:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:07.631 11:39:56 -- spdk/autotest.sh@124 -- # uname -s 00:03:07.631 11:39:56 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:07.631 11:39:56 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.631 11:39:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:07.631 11:39:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:07.631 11:39:56 -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 START TEST setup.sh 00:03:07.631 ************************************ 00:03:07.631 11:39:56 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.631 * Looking for test storage... 00:03:07.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.631 11:39:56 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:07.631 11:39:56 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:07.631 11:39:56 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.631 11:39:56 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:07.631 11:39:56 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:07.631 11:39:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 START TEST acl 00:03:07.631 ************************************ 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.631 * Looking for test storage... 
00:03:07.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.631 11:39:56 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:07.631 11:39:56 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:07.631 11:39:56 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.631 11:39:56 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.005 11:39:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:09.005 11:39:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:09.005 11:39:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.005 11:39:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:09.005 11:39:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.005 11:39:58 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:09.943 Hugepages 00:03:09.943 node hugesize free / total 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 00:03:09.943 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.943 11:39:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.943 11:39:59 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:09.943 11:39:59 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:09.943 11:39:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.943 ************************************ 00:03:09.943 START TEST denied 00:03:09.943 ************************************ 00:03:09.943 11:39:59 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:09.943 11:39:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:09.943 11:39:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:09.943 11:39:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:09.944 11:39:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.944 11:39:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.845 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:11.845 11:40:00 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.845 11:40:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.380 00:03:14.380 real 0m3.969s 00:03:14.380 user 0m1.134s 00:03:14.380 sys 0m1.860s 00:03:14.380 11:40:03 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:14.380 11:40:03 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.380 ************************************ 00:03:14.380 END TEST denied 00:03:14.380 ************************************ 00:03:14.380 11:40:03 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.380 11:40:03 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:14.380 11:40:03 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:14.380 11:40:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.380 ************************************ 00:03:14.380 START TEST allowed 00:03:14.380 ************************************ 00:03:14.380 11:40:03 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:14.380 11:40:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.380 11:40:03 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:14.380 11:40:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.380 11:40:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.380 11:40:03 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.970 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.970 11:40:05 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:16.970 11:40:05 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:16.970 11:40:05 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:16.970 11:40:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.970 11:40:05 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.348 00:03:18.348 real 0m4.074s 00:03:18.348 user 0m1.077s 00:03:18.348 sys 0m1.849s 00:03:18.348 11:40:07 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:18.348 11:40:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.348 ************************************ 00:03:18.348 END TEST allowed 00:03:18.348 ************************************ 00:03:18.348 00:03:18.348 real 0m10.831s 00:03:18.348 user 0m3.330s 00:03:18.348 sys 0m5.446s 00:03:18.348 11:40:07 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:18.348 11:40:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.348 ************************************ 00:03:18.348 END TEST acl 00:03:18.348 ************************************ 00:03:18.348 11:40:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.348 11:40:07 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.348 11:40:07 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.348 11:40:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.348 ************************************ 00:03:18.348 START TEST hugepages 00:03:18.348 ************************************ 00:03:18.348 11:40:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.348 * Looking for test storage... 00:03:18.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44830936 kB' 'MemAvailable: 48286876 kB' 'Buffers: 2704 kB' 'Cached: 9254108 kB' 'SwapCached: 0 kB' 'Active: 6238264 kB' 'Inactive: 3481172 kB' 'Active(anon): 5853048 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465952 kB' 'Mapped: 207104 kB' 'Shmem: 5390424 kB' 'KReclaimable: 166348 kB' 'Slab: 497092 kB' 'SReclaimable: 166348 kB' 'SUnreclaim: 330744 kB' 'KernelStack: 12800 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 6973196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195812 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:18.349 11:40:07 setup.sh.hugepages -- setup/common.sh@31-32 -- # [meminfo scan: "IFS=': '; read -r var val _" repeats for each remaining /proc/meminfo field (MemFree, MemAvailable, Buffers, Cached, and so on in /proc/meminfo order through HugePages_Rsvd); every field that is not Hugepagesize falls through to "continue"]
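The scan summarized above, which resumes just below with the last two fields, is the generic /proc/meminfo lookup these setup helpers rely on: split each line on ': ', skip fields until the requested one, echo its value, and return. A minimal standalone sketch of that pattern, with illustrative names rather than the exact setup/common.sh implementation:

    # Sketch: look up a single /proc/meminfo field the same way the trace does.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching fields hit "continue", as in the trace
            echo "$val"                        # e.g. 2048 for Hugepagesize (value is in kB)
            return 0
        done < /proc/meminfo
        return 1
    }
    # lookup_meminfo Hugepagesize   -> 2048 on this machine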
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
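The clear_hp pass traced above zeroes every per-node hugepage pool before the test configures its own counts. A minimal sketch of that step (needs root; the sysfs paths follow the layout shown in the trace, while the loop itself is illustrative rather than the exact setup/hugepages.sh code):

    # Write 0 to each per-node hugepage pool, as the echo 0 lines above do.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node_dir"/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"
        done
    done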
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.350 11:40:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.350 11:40:07 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.350 11:40:07 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.350 11:40:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.350 ************************************ 00:03:18.350 START TEST default_setup 00:03:18.350 ************************************ 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.350 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.351 11:40:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.729 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.729 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:03:19.729 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.729 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.672 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46945492 kB' 'MemAvailable: 50401476 kB' 'Buffers: 2704 kB' 'Cached: 9254200 kB' 'SwapCached: 0 kB' 'Active: 6251304 kB' 'Inactive: 3481172 kB' 'Active(anon): 5866088 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478516 kB' 'Mapped: 206000 kB' 'Shmem: 5390516 kB' 'KReclaimable: 166436 kB' 'Slab: 496584 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330148 kB' 'KernelStack: 12656 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.672 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.672 
11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [meminfo scan: the same "IFS=': '; read -r var val _" loop walks the fields Active(anon) through CommitLimit; none of them matches AnonHugePages, so each falls through to "continue"]
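For reference, the three counters verify_nr_hugepages is collecting in this stretch of the trace (anon from AnonHugePages, surp from HugePages_Surp, and resv from HugePages_Rsvd a little further down) can each be read in one step; an equivalent awk form, not the script's own code:

    anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)   # anonymous THP in use, kB
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # surplus persistent hugepages
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # reserved persistent hugepages
    echo "anon=$anon surp=$surp resv=$resv"                          # this run reports anon=0 and surp=0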
continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.673 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46945008 kB' 'MemAvailable: 50400992 kB' 'Buffers: 2704 kB' 'Cached: 9254204 kB' 'SwapCached: 0 kB' 'Active: 6250868 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865652 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478088 kB' 'Mapped: 205968 kB' 'Shmem: 5390520 kB' 'KReclaimable: 166436 kB' 'Slab: 496600 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330164 kB' 'KernelStack: 12704 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue
00:03:20.674 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [meminfo scan: the same "IFS=': '; read -r var val _" loop walks the fields Cached through CmaFree; none of them matches HugePages_Surp, so each falls through to "continue"]
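Tying the numbers in this trace together: default_setup asked get_test_nr_hugepages for 2097152 kB on node 0, Hugepagesize is 2048 kB, so the expected pool is 1024 pages, which matches the HugePages_Total: 1024 and HugePages_Free: 1024 reported in the meminfo snapshots above. A small illustrative recheck of that arithmetic (not the hugepages.sh verification code):

    size_kb=2097152                                                          # size passed to get_test_nr_hugepages
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)  # 2048 on this system
    expected=$(( size_kb / hugepagesize_kb ))                                # -> 1024
    actual=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    if (( actual == expected )); then echo "hugepages OK ($actual)"; else echo "mismatch: want $expected, have $actual"; fi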
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46944756 kB' 'MemAvailable: 50400740 kB' 'Buffers: 2704 kB' 'Cached: 9254220 kB' 'SwapCached: 0 kB' 'Active: 6250452 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865236 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478064 kB' 'Mapped: 205968 kB' 'Shmem: 5390536 kB' 'KReclaimable: 166436 kB' 'Slab: 496632 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330196 kB' 'KernelStack: 12672 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.675 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.939 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.940 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.941 nr_hugepages=1024 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.941 resv_hugepages=0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.941 surplus_hugepages=0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.941 anon_hugepages=0 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46944592 kB' 'MemAvailable: 50400576 kB' 'Buffers: 2704 kB' 'Cached: 9254240 kB' 'SwapCached: 0 kB' 'Active: 6250420 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865204 kB' 'Inactive(anon): 0 kB' 
'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478032 kB' 'Mapped: 205968 kB' 'Shmem: 5390556 kB' 'KReclaimable: 166436 kB' 'Slab: 496632 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330196 kB' 'KernelStack: 12656 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.941 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
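For readers following the trace: the long runs of IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s... ]] / continue entries above are setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time. It keeps discarding lines until the requested field matches (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total in this run), echoes that field's value, and returns. The sketch below is a minimal reconstruction of that pattern, assuming the function shape implied by the trace rather than quoting the SPDK source verbatim; the variable and path names are taken from the trace, everything else is an approximation.

#!/usr/bin/env bash
# Hedged sketch of the meminfo scan exercised by the trace above
# (approximates setup/common.sh's get_meminfo; not the verbatim SPDK code).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo instead of the global file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Skip every key until the requested one is found, then print its value.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage mirroring the checks in this run of the trace:
surp=$(get_meminfo HugePages_Surp)          # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo HugePages_Total)        # 1024 in this run
node0_surp=$(get_meminfo HugePages_Surp 0)  # reads /sys/devices/system/node/node0/meminfo
(( total == 1024 + surp + resv )) && echo "hugepage accounting is consistent"

With the values printed in this run (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), the hugepages.sh@107 and @110 assertions that bracket these calls reduce to (( 1024 == 1024 + 0 + 0 )), which is why the trace proceeds straight into the per-node walk that follows, repeating the same scan against /sys/devices/system/node/node0/meminfo.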
00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.942 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.943 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27710452 kB' 'MemUsed: 5119432 kB' 'SwapCached: 0 kB' 'Active: 1996268 kB' 'Inactive: 98144 kB' 'Active(anon): 1890268 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841580 kB' 'Mapped: 46712 kB' 'AnonPages: 256080 kB' 'Shmem: 1637436 kB' 'KernelStack: 7544 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 275832 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 194500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.943 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.943 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.943 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: in get_meminfo, setup/common.sh@31-32 keeps reading /proc/meminfo field by field (MemFree, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) and issues "continue" for every field that is not HugePages_Surp]
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:20.944 node0=1024 expecting 1024
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.944
00:03:20.944 real 0m2.549s
00:03:20.944 user 0m0.711s
00:03:20.944 sys 0m0.980s
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:20.944 11:40:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:20.944 ************************************
00:03:20.944 END TEST default_setup
00:03:20.944 ************************************
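As an aside, the node0=1024 check that closes default_setup above can be reproduced by hand from the kernel's per-node hugepage counters. A minimal sketch, assuming the 2048 kB hugepage size reported in the meminfo snapshots below (illustrative only, not part of the test log or of setup/hugepages.sh):

node0_pages=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)   # per-node hugetlb counter
echo "node0=${node0_pages} expecting 1024"                                                   # same shape as the log line above
[[ $node0_pages -eq 1024 ]] || echo "unexpected hugepage count on node0" >&2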
00:03:20.944 11:40:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:20.944 11:40:10 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:20.944 11:40:10 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:20.944 11:40:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.944 ************************************
00:03:20.944 START TEST per_node_1G_alloc
00:03:20.944 ************************************
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.944 11:40:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
[setup.sh output condensed: 0000:00:04.0-04.7 and 0000:80:04.0-04.7 (8086 0e20-0e27) plus 0000:88:00.0 (8086 0a54) are already using the vfio-pci driver]
00:03:22.330 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
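The request traced above works out as follows: get_test_nr_hugepages is handed 1048576 kB (1 GiB) and two node ids, and at the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, which the trace then assigns to each of nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1) before scripts/setup.sh is invoked. A rough stand-alone sketch of the same per-node request through the kernel's generic sysfs interface (an assumption for illustration, not the actual code path inside setup.sh):

size_kb=1048576                      # size handed to get_test_nr_hugepages (1 GiB)
hugepage_kb=2048                     # Hugepagesize reported in the snapshots below
pages=$(( size_kb / hugepage_kb ))   # 512, matching nr_hugepages=512 in the trace
for node in 0 1; do                  # HUGENODE=0,1
    echo "$pages" > /sys/devices/system/node/node${node}/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages
done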
00:03:22.330 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
[xtrace condensed: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon), confirms the transparent hugepage setting is not "[never]" (the reported value is "always [madvise] never"), and calls "get_meminfo AnonHugePages"; get_meminfo uses mem_f=/proc/meminfo (no node argument), mapfiles it into mem[], and reads it field by field with IFS=': ']
00:03:22.330 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46951700 kB' 'MemAvailable: 50407684 kB' 'Buffers: 2704 kB' 'Cached: 9254320 kB' 'SwapCached: 0 kB' 'Active: 6251392 kB' 'Inactive: 3481172 kB' 'Active(anon): 5866176 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478652 kB' 'Mapped: 206164 kB' 'Shmem: 5390636 kB' 'KReclaimable: 166436 kB' 'Slab: 496608 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330172 kB' 'KernelStack: 12688 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: the field-by-field scan skips MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted until it reaches AnonHugePages]
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
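The scan traced above is the familiar get_meminfo pattern: split each /proc/meminfo line on ':' and whitespace, keep reading until the requested field name matches, then print its value. A minimal stand-alone sketch of that pattern (a hypothetical helper written for illustration, not the setup/common.sh source; per-node meminfo handling is omitted):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <field>, e.g. get_meminfo_sketch AnonHugePages
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"        # value in kB for most fields, a bare page count for HugePages_*
            return 0
        fi
    done < /proc/meminfo
    return 1                        # field not present
}
# On the machine above this prints 0 for AnonHugePages and 0 for HugePages_Surp.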
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: the get_meminfo preamble repeats for HugePages_Surp (local get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, field-by-field read with IFS=': ')]
00:03:22.332 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46951872 kB' 'MemAvailable: 50407856 kB' 'Buffers: 2704 kB' 'Cached: 9254328 kB' 'SwapCached: 0 kB' 'Active: 6251548 kB' 'Inactive: 3481172 kB' 'Active(anon): 5866332 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478848 kB' 'Mapped: 206152 kB' 'Shmem: 5390644 kB' 'KReclaimable: 166436 kB' 'Slab: 496592 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330156 kB' 'KernelStack: 12736 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: the scan reads and skips every /proc/meminfo field from MemTotal down to HugePages_Rsvd before matching HugePages_Surp]
00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
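With anon=0 and surp=0 collected, the snapshots themselves can be sanity-checked: they report HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, and 1024 * 2048 kB is indeed 2097152 kB. A one-liner sketch of the same check against a live /proc/meminfo (illustrative only, not part of the harness):

awk '/^(HugePages_Total|Hugepagesize|Hugetlb)/ {v[$1]=$2}
     END {printf "computed=%d kB reported=%d kB\n", v["HugePages_Total:"] * v["Hugepagesize:"], v["Hugetlb:"]}' /proc/meminfo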
get=HugePages_Rsvd 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46952164 kB' 'MemAvailable: 50408148 kB' 'Buffers: 2704 kB' 'Cached: 9254332 kB' 'SwapCached: 0 kB' 'Active: 6251016 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865800 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478320 kB' 'Mapped: 205980 kB' 'Shmem: 5390648 kB' 'KReclaimable: 166436 kB' 'Slab: 496688 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330252 kB' 'KernelStack: 12720 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.334 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.335 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.336 nr_hugepages=1024 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.336 resv_hugepages=0 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.336 surplus_hugepages=0 00:03:22.336 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.336 anon_hugepages=0 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46953112 kB' 'MemAvailable: 50409096 kB' 'Buffers: 2704 kB' 'Cached: 9254364 kB' 'SwapCached: 0 kB' 'Active: 6250936 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865720 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478160 kB' 'Mapped: 205980 kB' 'Shmem: 5390680 kB' 'KReclaimable: 166436 kB' 'Slab: 496688 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330252 kB' 'KernelStack: 12704 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6987880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.336 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.337 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28771512 kB' 'MemUsed: 4058372 kB' 'SwapCached: 0 kB' 'Active: 1995896 kB' 'Inactive: 98144 kB' 'Active(anon): 1889896 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841688 kB' 'Mapped: 46712 kB' 'AnonPages: 255516 kB' 'Shmem: 1637544 kB' 'KernelStack: 7576 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 275752 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 194420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.338 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18182796 kB' 'MemUsed: 9529028 kB' 'SwapCached: 0 kB' 'Active: 4255008 kB' 'Inactive: 3383028 kB' 'Active(anon): 3975792 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3383028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7415404 kB' 'Mapped: 159268 kB' 'AnonPages: 222636 kB' 'Shmem: 3753160 kB' 'KernelStack: 5128 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85104 kB' 'Slab: 220936 kB' 'SReclaimable: 85104 kB' 'SUnreclaim: 135832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 
11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.340 11:40:11 
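
The long walk above is setup/common.sh's get_meminfo matching every field of node 1's meminfo against HugePages_Surp; a few entries further on it reaches the HugePages_Surp line and returns 0, the value printed in the dump earlier. A minimal re-implementation of that lookup pattern is sketched below. The function name get_meminfo_field is hypothetical and the Node-prefix handling is an assumption based on the prefix-stripping expansion visible in the trace; this is not the SPDK helper itself.

  #!/usr/bin/env bash
  # Sketch only: reproduces the field-lookup pattern seen in the trace,
  # not the actual setup/common.sh implementation.
  shopt -s extglob
  get_meminfo_field() {    # hypothetical name
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      # per-node meminfo lives under /sys and prefixes every field with "Node <n> "
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }            # drop the per-node prefix, if any
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then          # e.g. HugePages_Surp
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  get_meminfo_field HugePages_Surp 1   # prints the node-1 surplus page count
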
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.341 node0=512 expecting 512 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.341 node1=512 expecting 512 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.341 00:03:22.341 real 0m1.532s 00:03:22.341 user 0m0.663s 00:03:22.341 sys 0m0.813s 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:22.341 11:40:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.341 ************************************ 00:03:22.341 END TEST per_node_1G_alloc 00:03:22.341 ************************************ 00:03:22.599 11:40:11 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.599 11:40:11 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:22.599 11:40:11 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:22.599 11:40:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.599 ************************************ 00:03:22.599 START TEST even_2G_alloc 
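
At this point the per_node_1G_alloc case has folded each node's surplus and reserved counts into nodes_test, printed node0=512 expecting 512 and node1=512 expecting 512, and passed the closing [[ 512 == 512 ]] comparison. A condensed sketch of that equality check follows; the hard-coded counts mirror the values in the log, and the loop is an illustrative equivalent rather than the script's sorted_t/sorted_s construction.

  #!/usr/bin/env bash
  # Illustrative equivalent of the final per-node check; values are taken
  # from the log above, the loop structure is an assumption.
  expected=512
  nodes_test=(512 512)          # per-node 2M page counts after accounting
  for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_test[node]} expecting ${expected}"
      [[ ${nodes_test[node]} -eq $expected ]] || exit 1
  done
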
00:03:22.599 ************************************ 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.599 11:40:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.984 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.984 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.984 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:23.984 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.984 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.984 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.984 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:03:23.984 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.984 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.984 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.984 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:23.984 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.984 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.984 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.984 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:23.984 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.984 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.984 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46972668 kB' 'MemAvailable: 50428652 kB' 'Buffers: 2704 kB' 'Cached: 9254452 kB' 'SwapCached: 0 kB' 'Active: 6250932 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865716 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478208 kB' 'Mapped: 206108 kB' 'Shmem: 5390768 kB' 'KReclaimable: 166436 kB' 'Slab: 496756 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330320 kB' 'KernelStack: 12688 kB' 'PageTables: 7904 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6988076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
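
The even_2G_alloc case that started above first runs get_test_nr_hugepages 2097152, arriving at nr_hugepages=1024 and a 512/512 split across the two nodes (NRHUGE=1024, HUGE_EVEN_ALLOC=yes) before re-running scripts/setup.sh. The arithmetic behind that split is sketched below, assuming the 2097152 argument is a size in kB, which is consistent with 1024 pages of the 2048 kB Hugepagesize reported in the meminfo dump above.

  #!/usr/bin/env bash
  # Sketch of the even-allocation arithmetic; variable names are
  # illustrative and units are assumed to be kB.
  size_kb=2097152                             # 2 GiB worth of hugepages, as in the trace
  hugepage_kb=2048                            # default hugepage size on this system
  no_nodes=2                                  # assumption: two NUMA nodes, matching the log
  nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024
  nodes_test=()
  for (( node = 0; node < no_nodes; node++ )); do
      nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
  done
  echo "NRHUGE=${nr_hugepages} node0=${nodes_test[0]} node1=${nodes_test[1]}"
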
00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node= 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46972704 kB' 'MemAvailable: 50428688 kB' 'Buffers: 2704 kB' 'Cached: 9254452 kB' 'SwapCached: 0 kB' 'Active: 6252840 kB' 'Inactive: 3481172 kB' 'Active(anon): 5867624 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480100 kB' 'Mapped: 206528 kB' 'Shmem: 5390768 kB' 'KReclaimable: 166436 kB' 'Slab: 496724 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330288 kB' 'KernelStack: 12736 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6990244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:23.985 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 
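
Before verify_nr_hugepages compares counters it tests whether transparent hugepages are active: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check earlier in the trace passes because the enabled policy is [madvise] rather than [never], so AnonHugePages is read from /proc/meminfo (0 kB here) and anon is set to 0. A small sketch of that branch, with the surrounding bookkeeping omitted:

  #!/usr/bin/env bash
  # Sketch of the THP/anon accounting step; only the branch logic is
  # reproduced, and how anon feeds into the final totals is not shown.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP may be backing anonymous memory, so note how much is in use (kB)
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages accounted: ${anon} kB"
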
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.986 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46973788 kB' 'MemAvailable: 50429772 kB' 'Buffers: 2704 kB' 'Cached: 9254456 kB' 'SwapCached: 0 kB' 'Active: 6254080 kB' 'Inactive: 3481172 kB' 'Active(anon): 5868864 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481372 kB' 'Mapped: 206092 kB' 'Shmem: 5390772 kB' 'KReclaimable: 166436 kB' 'Slab: 496704 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 330268 kB' 'KernelStack: 12720 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6980156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 
11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
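The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one record at a time: IFS=': ' splits each line into a key and a value, every key that is not the requested one (HugePages_Surp, then HugePages_Rsvd) falls through to continue, and the matching key's value is echoed back with return 0, which hugepages.sh then captures as surp=0 and resv=0. A minimal sketch of that loop, reconstructed from the trace rather than copied from the source file (the real helper also mapfiles the whole file and strips "Node N " prefixes for per-node reads):

    get_meminfo() {
        # simplified reconstruction of the loop shown in the xtrace above
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _rest
        # per-node queries read that node's own meminfo when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _rest; do
            # keys that don't match the requested one are skipped ("continue" in the trace)
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
    }

    # e.g. surp=$(get_meminfo HugePages_Surp)   # -> 0 on this system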
00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.987 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.988 nr_hugepages=1024 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.988 resv_hugepages=0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.988 surplus_hugepages=0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.988 anon_hugepages=0 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46969896 kB' 'MemAvailable: 50425876 kB' 
'Buffers: 2704 kB' 'Cached: 9254496 kB' 'SwapCached: 0 kB' 'Active: 6247680 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862464 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474852 kB' 'Mapped: 205604 kB' 'Shmem: 5390812 kB' 'KReclaimable: 166428 kB' 'Slab: 496676 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330248 kB' 'KernelStack: 12672 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6975076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.988 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.989 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28775508 kB' 'MemUsed: 4054376 kB' 'SwapCached: 0 kB' 'Active: 1997680 kB' 'Inactive: 98144 kB' 'Active(anon): 1891680 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841768 kB' 'Mapped: 46444 kB' 'AnonPages: 257204 kB' 'Shmem: 1637624 kB' 'KernelStack: 7592 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81324 kB' 'Slab: 275756 kB' 'SReclaimable: 81324 kB' 'SUnreclaim: 194432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.989 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18190860 kB' 'MemUsed: 9520964 kB' 'SwapCached: 0 kB' 'Active: 4253540 kB' 'Inactive: 3383028 kB' 'Active(anon): 3974324 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3383028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7415468 kB' 'Mapped: 159292 kB' 'AnonPages: 221156 kB' 'Shmem: 3753224 kB' 'KernelStack: 5080 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85104 kB' 'Slab: 220920 kB' 'SReclaimable: 85104 kB' 'SUnreclaim: 135816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 
11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.990 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 
11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.991 node0=512 expecting 512 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.991 node1=512 expecting 512 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.991 00:03:23.991 real 0m1.521s 00:03:23.991 user 0m0.641s 00:03:23.991 sys 0m0.843s 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:23.991 11:40:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.991 ************************************ 00:03:23.991 END TEST even_2G_alloc 00:03:23.991 ************************************ 00:03:23.991 11:40:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:23.991 11:40:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:23.991 11:40:13 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:23.991 11:40:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.991 ************************************ 00:03:23.991 START TEST odd_alloc 00:03:23.991 ************************************ 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.991 11:40:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.369 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.369 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.369 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.369 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.369 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.369 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.369 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.369 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.369 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.369 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.369 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.369 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.369 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.369 0000:80:04.3 (8086 0e23): Already using the 
vfio-pci driver 00:03:25.369 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.369 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.369 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.369 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46960924 kB' 'MemAvailable: 50416904 kB' 'Buffers: 2704 kB' 'Cached: 9254588 kB' 'SwapCached: 0 kB' 'Active: 6248076 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862860 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475132 kB' 'Mapped: 205188 kB' 'Shmem: 5390904 kB' 'KReclaimable: 166428 kB' 'Slab: 496668 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330240 kB' 'KernelStack: 12624 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6974420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 
'DirectMap1G: 54525952 kB' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _
00:03:25.370 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [ scan: Inactive(anon) .. HardwareCorrupted do not match AnonHugePages; each field is skipped with 'continue' / IFS=': ' / read -r var val _ ]
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # [ get_meminfo prologue: local get=HugePages_Surp, node= is empty so mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), then IFS=': ' read -r var val _ over each line ]
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46962148 kB' 'MemAvailable: 50418128 kB' 'Buffers: 2704 kB' 'Cached: 9254588 kB' 'SwapCached: 0 kB' 'Active: 6248352 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863136 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475472 kB' 'Mapped: 205236 kB' 'Shmem: 5390904 kB' 'KReclaimable: 166428 kB' 'Slab: 496700 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330272 kB' 'KernelStack: 12688 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6974436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
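What the trace above records is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: every key that is not the one requested falls through to 'continue', and the first match is echoed back to setup/hugepages.sh (here AnonHugePages reads 0 kB, so the caller stores anon=0). A minimal, self-contained sketch of that scan; the function name meminfo_value is illustrative, not the SPDK helper:

```bash
#!/usr/bin/env bash
# Sketch of the per-field scan seen in the trace: print the value column of
# one /proc/meminfo key, skipping every other field the way the repeated
# 'continue' entries above do.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested key -> next line
        echo "$val"                        # value only; any trailing "kB" goes into the third read field
        return 0
    done < /proc/meminfo
    return 1                               # key not present at all
}

meminfo_value AnonHugePages     # -> 0 on this runner, per the snapshot above
meminfo_value HugePages_Total   # -> 1025 on this runner after the odd allocation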
00:03:25.371 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [ scan: MemTotal .. HugePages_Rsvd do not match HugePages_Surp; each field is skipped with 'continue' ]
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # [ get_meminfo prologue: local get=HugePages_Rsvd, node= is empty so mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' read -r var val _ ]
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46962900 kB' 'MemAvailable: 50418880 kB' 'Buffers: 2704 kB' 'Cached: 9254608 kB' 'SwapCached: 0 kB' 'Active: 6247944 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862728 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475004 kB' 'Mapped: 205160 kB' 'Shmem: 5390924 kB' 'KReclaimable: 166428 kB' 'Slab: 496720 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330292 kB' 'KernelStack: 12672 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6974460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
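Each get_meminfo prologue above also covers the per-NUMA-node case: the node argument is empty here, so mem_f stays /proc/meminfo, but when a node is given the same scan runs over /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <n> " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A standalone illustration of that strip, assuming a node0 directory exists on the machine (extglob is required for the +([0-9]) pattern):

```bash
#!/usr/bin/env bash
shopt -s extglob                                  # needed for +([0-9]) in the strip below

# Per-node meminfo lines look like "Node 0 MemTotal: ... kB"; /proc/meminfo has no prefix.
node_f=/sys/devices/system/node/node0/meminfo     # assumes node0 is present on this host
mapfile -t mem < "$node_f"                        # whole file into an array, one line per element
mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemTotal: ..." -> "MemTotal: ..."

# After the strip the array can be scanned exactly like /proc/meminfo.
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free)'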
00:03:25.373 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [ scan: MemTotal .. HugePages_Free do not match HugePages_Rsvd; each field is skipped with 'continue' ]
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:25.375 nr_hugepages=1025
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.375 resv_hugepages=0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.375 surplus_hugepages=0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.375 anon_hugepages=0
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.375 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # [ get_meminfo prologue: local get=HugePages_Total, node= is empty so mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' read -r var val _ ]
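At this point setup/hugepages.sh has every counter it needs for the odd_alloc check: 1025 hugepages were requested (an odd count, one past 1024), and anon, surplus and reserved pages all read back as 0, so the @107/@109 checks above, together with the HugePages_Total read that follows, are effectively asserting that the kernel reports exactly 1025. A hedged sketch of the same accounting against a live /proc/meminfo; variable names mirror the log output, not the SPDK script:

```bash
#!/usr/bin/env bash
# Sketch of the accounting asserted around setup/hugepages.sh@107-110 above:
# the requested odd count must equal HugePages_Total, with no surplus or
# reserved pages left over. 'want' is an illustrative name.
want=1025

hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(hp HugePages_Total)
surp=$(hp HugePages_Surp)
resv=$(hp HugePages_Rsvd)
anon=$(hp AnonHugePages)

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

if (( want == total + surp + resv )) && (( want == total )); then
    echo "odd_alloc accounting OK"
else
    echo "unexpected hugepage accounting" >&2
fi
```

The same per-size counters are also exposed under /sys/kernel/mm/hugepages/hugepages-2048kB/ (nr_hugepages, free_hugepages, resv_hugepages, surplus_hugepages) if a cross-check outside /proc/meminfo is ever wanted.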
00:03:25.376 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46963552 kB' 'MemAvailable: 50419532 kB' 'Buffers: 2704 kB' 'Cached: 9254628 kB' 'SwapCached: 0 kB' 'Active: 6247972 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862756 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475008 kB' 'Mapped: 205160 kB' 'Shmem: 5390944 kB' 'KReclaimable: 166428 kB' 'Slab: 496720 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330292 kB' 'KernelStack: 12672 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6974480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
00:03:25.376 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [ scan: MemTotal .. Shmem do not match HugePages_Total; each field is skipped with 'continue' ]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 
11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28768056 kB' 'MemUsed: 4061828 kB' 'SwapCached: 0 kB' 'Active: 1994032 kB' 'Inactive: 98144 kB' 'Active(anon): 1888032 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841768 kB' 'Mapped: 46008 kB' 'AnonPages: 253552 kB' 'Shmem: 1637624 kB' 'KernelStack: 7592 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81324 kB' 'Slab: 275720 kB' 'SReclaimable: 81324 kB' 'SUnreclaim: 194396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
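The trace above is the odd_alloc verification step: get_nodes walks /sys/devices/system/node/node* and records how many 2048 kB hugepages each node currently holds (512 on node0 and 513 on node1 in this run), and the loops that follow fold each node's reserved and surplus pages into the expected per-node totals. A minimal sketch of that enumeration, assuming the per-node count is read from the hugepages sysfs attribute (the exact read is not shown in the trace, so treat the path as an assumption):

    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        # index by node number; 2048kB matches the Hugepagesize reported in the dumps above
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]} counts=${nodes_sys[*]}"   # e.g. no_nodes=2 counts=512 513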
00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
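Each of the long [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue runs above is get_meminfo scanning one meminfo dump field by field until it reaches the requested key and echoes its value (here HugePages_Surp for node0, which is 0). A compact sketch of that parsing logic, written as a hypothetical helper rather than the verbatim setup/common.sh function:

    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # prefer the per-node file when a node number is given, as the trace does
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node "$node" }              # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value [kB]"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Surp 0   # prints 0 on this system, matching the trace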
00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18195020 kB' 'MemUsed: 9516804 kB' 'SwapCached: 0 kB' 'Active: 4254356 kB' 'Inactive: 3383028 kB' 'Active(anon): 3975140 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3383028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7415604 kB' 'Mapped: 159092 kB' 'AnonPages: 221852 kB' 'Shmem: 3753360 kB' 'KernelStack: 5096 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85104 kB' 'Slab: 221000 kB' 'SReclaimable: 85104 kB' 'SUnreclaim: 135896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:25.640 node0=512 expecting 513 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.640 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:25.640 node1=513 expecting 512 00:03:25.641 11:40:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:25.641 00:03:25.641 real 0m1.497s 00:03:25.641 user 0m0.604s 00:03:25.641 sys 0m0.844s 00:03:25.641 11:40:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:25.641 11:40:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.641 ************************************ 00:03:25.641 END TEST odd_alloc 00:03:25.641 ************************************ 00:03:25.641 11:40:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.641 11:40:14 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:25.641 11:40:14 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:25.641 11:40:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.641 ************************************ 00:03:25.641 START TEST custom_alloc 00:03:25.641 ************************************ 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 
)) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.641 11:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.028 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.028 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.028 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.028 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.028 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.028 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.028 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.028 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.028 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.028 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.028 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.028 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.028 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.028 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.028 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.028 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.028 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # 
verify_nr_hugepages 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45907088 kB' 'MemAvailable: 49363068 kB' 'Buffers: 2704 kB' 'Cached: 9254720 kB' 'SwapCached: 0 kB' 'Active: 6247652 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862436 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474628 kB' 'Mapped: 205148 kB' 'Shmem: 5391036 kB' 'KReclaimable: 166428 kB' 'Slab: 496628 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330200 kB' 'KernelStack: 12624 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6974680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.029 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541708 kB' 'MemFree: 45907524 kB' 'MemAvailable: 49363504 kB' 'Buffers: 2704 kB' 'Cached: 9254720 kB' 'SwapCached: 0 kB' 'Active: 6248520 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863304 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475528 kB' 'Mapped: 205148 kB' 'Shmem: 5391036 kB' 'KReclaimable: 166428 kB' 'Slab: 496664 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330236 kB' 'KernelStack: 12688 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6974696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.030 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.031 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.032 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45907824 kB' 'MemAvailable: 49363804 kB' 'Buffers: 2704 kB' 'Cached: 9254740 kB' 'SwapCached: 0 kB' 'Active: 6248224 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863008 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475176 kB' 'Mapped: 205148 kB' 'Shmem: 5391056 kB' 'KReclaimable: 166428 kB' 'Slab: 496660 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330232 kB' 'KernelStack: 12672 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6974720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.033 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
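The long run of "[[ <field> == HugePages_Rsvd ]] ... continue" entries above is the xtrace of the get_meminfo helper in test/setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file when a node is given), then walks the snapshot field by field with IFS=': ' until it reaches the requested key and echoes that value. A minimal standalone sketch of the same parsing pattern, assuming a plain while-read/sed form instead of the script's mapfile-based variant (the function name and layout here are illustrative, not the script itself):

    get_meminfo_sketch() {
        # Illustrative only; mirrors the field-by-field scan visible in the trace above.
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo, as the trace later does for node0 and node1.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every field with "Node N "; strip it so the same compare works for both.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each non-matching field is one of the 'continue' entries above
            echo "$val"                        # kB for sizes, a plain page count for the HugePages_* fields
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as, for example, get_meminfo_sketch HugePages_Rsvd, which for the snapshot above would print 0 — the value the trace turns into resv=0 a few entries later.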
00:03:27.034 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:27.035 nr_hugepages=1536 00:03:27.035 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.035 resv_hugepages=0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.035 surplus_hugepages=0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.035 anon_hugepages=0 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45907824 kB' 'MemAvailable: 49363804 kB' 'Buffers: 2704 kB' 'Cached: 9254760 kB' 'SwapCached: 0 kB' 'Active: 6248208 kB' 'Inactive: 3481172 kB' 'Active(anon): 5862992 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475180 kB' 'Mapped: 205148 kB' 'Shmem: 5391076 kB' 'KReclaimable: 166428 kB' 'Slab: 496660 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330232 kB' 'KernelStack: 12672 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6974740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.035 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.036 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 
11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.037 
11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28769108 kB' 'MemUsed: 4060776 kB' 'SwapCached: 0 kB' 'Active: 1994364 kB' 'Inactive: 98144 kB' 'Active(anon): 1888364 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841780 kB' 'Mapped: 45984 kB' 'AnonPages: 253888 kB' 'Shmem: 1637636 kB' 'KernelStack: 7592 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81324 kB' 'Slab: 275736 kB' 'SReclaimable: 81324 kB' 'SUnreclaim: 194412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.037 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.038 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.039 
11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17138976 kB' 'MemUsed: 10572848 kB' 'SwapCached: 0 kB' 'Active: 4253868 kB' 'Inactive: 3383028 kB' 'Active(anon): 3974652 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3383028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7415708 kB' 'Mapped: 159164 kB' 'AnonPages: 221288 kB' 'Shmem: 3753464 kB' 'KernelStack: 5080 kB' 'PageTables: 3140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85104 kB' 'Slab: 220924 kB' 'SReclaimable: 85104 kB' 'SUnreclaim: 135820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.039 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 
11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.040 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.041 node0=512 expecting 512 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 
1024' 00:03:27.041 node1=1024 expecting 1024 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:27.041 00:03:27.041 real 0m1.509s 00:03:27.041 user 0m0.657s 00:03:27.041 sys 0m0.807s 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.041 11:40:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.041 ************************************ 00:03:27.041 END TEST custom_alloc 00:03:27.041 ************************************ 00:03:27.041 11:40:16 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:27.041 11:40:16 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.041 11:40:16 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.041 11:40:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.299 ************************************ 00:03:27.299 START TEST no_shrink_alloc 00:03:27.299 ************************************ 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.299 11:40:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.235 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.235 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.235 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.235 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.235 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.235 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.235 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.235 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.235 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.235 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.235 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.496 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.496 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.496 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.496 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.496 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.496 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.496 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 
'MemFree: 46945432 kB' 'MemAvailable: 50401412 kB' 'Buffers: 2704 kB' 'Cached: 9254848 kB' 'SwapCached: 0 kB' 'Active: 6254572 kB' 'Inactive: 3481172 kB' 'Active(anon): 5869356 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481396 kB' 'Mapped: 206112 kB' 'Shmem: 5391164 kB' 'KReclaimable: 166428 kB' 'Slab: 496468 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330040 kB' 'KernelStack: 12736 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6982300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196020 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
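For reference, the no_shrink_alloc preamble traced a little earlier (get_test_nr_hugepages 2097152 0, nr_hugepages=1024, node_ids=('0')) reduces to simple arithmetic: the requested size in kB divided by the default hugepage size gives the page count, which is consistent with 2097152 / 2048 = 1024, and because an explicit node list was passed the whole count is pinned to that node. A standalone sketch of that sizing step follows; the helper and variable names are illustrative, not the exact hugepages.sh internals.

#!/usr/bin/env bash
# Illustrative sizing step, not the exact get_test_nr_hugepages from hugepages.sh.
size_kb=2097152                                                          # requested pool size, as in the trace
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # typically 2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))                        # 2097152 / 2048 = 1024

declare -a nodes_test
for node in 0; do                                   # explicit node list, as in "node_ids=('0')"
    nodes_test[node]=$nr_hugepages                  # all 1024 pages pinned to node 0
done
echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"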
00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
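The check `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` near the top of verify_nr_hugepages compares the contents of /sys/kernel/mm/transparent_hugepage/enabled against the literal token [never]: only when THP is not fully disabled does the test go on to read AnonHugePages (which comes back as anon=0 further down). A hedged sketch of that guard is below; the sysfs path is the standard one, but the surrounding logic is simplified relative to hugepages.sh.

#!/usr/bin/env bash
# Simplified THP guard in the spirit of the traced verify_nr_hugepages check.
thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
    # THP is not hard-disabled, so anonymous huge pages may exist and are
    # tracked separately from the static hugetlb pool being verified here.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages to account for: ${anon} kB"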
00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
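The long run of `[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` / `continue` entries above is get_meminfo scanning a snapshot of /proc/meminfo field by field: `IFS=': '` splits each line into `var val _`, every non-matching key is skipped with `continue`, and once the requested key is reached the helper echoes its value (hence the `echo 0` / `return 0` pair that closes each lookup). A minimal standalone version of that lookup is sketched below, assuming a plain system-wide read; the real setup/common.sh helper also handles per-node meminfo under /sys/devices/system/node/, which is omitted here.

#!/usr/bin/env bash
# Minimal meminfo lookup in the style of the traced get_meminfo helper.
# Assumption: system-wide /proc/meminfo only; the per-node variant supported
# by the real setup/common.sh is not reproduced.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key except the requested one
        echo "$val"                        # value in kB (or a bare count for HugePages_*)
        return 0
    done < /proc/meminfo
    return 1                               # key not found
}

get_meminfo_sketch AnonHugePages    # 0 in the /proc/meminfo dump captured above
get_meminfo_sketch HugePages_Total  # 1024 in the same dump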
00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.497 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46944200 kB' 'MemAvailable: 50400180 kB' 'Buffers: 2704 kB' 'Cached: 9254848 kB' 'SwapCached: 0 kB' 'Active: 6250716 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865500 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477524 kB' 'Mapped: 206080 kB' 'Shmem: 5391164 kB' 'KReclaimable: 166428 kB' 'Slab: 496444 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330016 kB' 'KernelStack: 12976 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6981104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.498 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46943344 kB' 'MemAvailable: 50399324 kB' 'Buffers: 2704 kB' 'Cached: 9254852 kB' 'SwapCached: 0 kB' 'Active: 6254776 kB' 'Inactive: 3481172 kB' 'Active(anon): 5869560 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481584 kB' 'Mapped: 205680 kB' 'Shmem: 5391168 kB' 'KReclaimable: 166428 kB' 'Slab: 496476 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330048 kB' 'KernelStack: 13232 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6982100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.499 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.500 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.501 nr_hugepages=1024 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.501 resv_hugepages=0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc 
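The long runs of trace lines above are setup/common.sh's get_meminfo walking /proc/meminfo one key at a time until it reaches the field it was asked for (HugePages_Rsvd here, giving resv=0 alongside the nr_hugepages=1024 echo). A condensed, hedged sketch of that logic, reconstructed from the trace and simplified rather than copied from the SPDK source:

    get_meminfo() {
        # $1 = meminfo key to fetch (e.g. HugePages_Rsvd); $2 = optional NUMA node.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _
        # When a node is given and the per-node file exists, read that instead;
        # its lines carry a "Node <n> " prefix, which is stripped below.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"          # numeric value only; a trailing "kB" unit falls into $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

The calls stepped through in this log are then equivalent to hp_rsvd=$(get_meminfo HugePages_Rsvd) for the system-wide value, and later get_meminfo HugePages_Surp 0 for the node 0 value.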
-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.501 surplus_hugepages=0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.501 anon_hugepages=0 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46947280 kB' 'MemAvailable: 50403260 kB' 'Buffers: 2704 kB' 'Cached: 9254856 kB' 'SwapCached: 0 kB' 'Active: 6250656 kB' 'Inactive: 3481172 kB' 'Active(anon): 5865440 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477612 kB' 'Mapped: 205704 kB' 'Shmem: 5391172 kB' 'KReclaimable: 166428 kB' 'Slab: 496476 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330048 kB' 'KernelStack: 13040 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6976016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.501 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.761 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.762 11:40:18 
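At this point hugepages.sh has all of the counters it needs (surp=0, resv=0, anon=0, HugePages_Total=1024), the trace shows the (( 1024 == nr_hugepages + surp + resv )) assertion passing, and get_nodes enumerates /sys/devices/system/node/node* (no_nodes=2). A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above; the function and variable names here are illustrative, not the script's exact ones:

    verify_hugepage_total() {
        # $1 = requested page count (nr_hugepages, 1024 in this run)
        local expected=$1
        local surp resv total node
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        # System-wide check: the pool must hold exactly the requested pages plus
        # any surplus/reserved ones (both 0 here, so 1024 == 1024 + 0 + 0).
        (( total == expected + surp + resv )) || return 1
        # Node bookkeeping: enumerate the NUMA nodes and read each node's own
        # counter, which is what the per-node HugePages_Surp pass below does.
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node${node}=$(get_meminfo HugePages_Total "$node")"
        done
    }

In this run the whole 1024-page pool sits on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0), which is why the check further down reports node0=1024 expecting 1024.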
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27715456 kB' 'MemUsed: 5114428 kB' 'SwapCached: 0 kB' 'Active: 1994580 kB' 'Inactive: 98144 kB' 'Active(anon): 1888580 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841788 kB' 'Mapped: 46368 kB' 'AnonPages: 254276 kB' 'Shmem: 1637644 kB' 'KernelStack: 7608 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81324 kB' 'Slab: 275528 kB' 'SReclaimable: 81324 kB' 'SUnreclaim: 194204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.762 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.763 11:40:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[repetitive xtrace condensed: setup/common.sh@32 checks each remaining /proc/meminfo key (SecPageTables through HugePages_Free) against HugePages_Surp, hits "continue" on every non-match, then re-reads the next line with IFS=': ']
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.763 11:40:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.141 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.141 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.141 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.141 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.141 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.141 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.141 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.141 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.141 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:30.141 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.141 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.141 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.141 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.141 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.141 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.141 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.141 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:30.141 INFO: Requested 512 hugepages but 1024 already allocated on node0
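The exchange above is the heart of the no_shrink_alloc case: scripts/setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no, and because node0 already holds 1024 hugepages it only prints the INFO line instead of shrinking the pool, which the verify_nr_hugepages pass that follows then confirms. Below is a minimal sketch of that behaviour; ensure_hugepages, its messages, and its 2048 kB-only handling are hypothetical illustrations, not the SPDK script itself.

  # Sketch: raise the 2048 kB hugepage pool on a node to at least COUNT, never lower it.
  # Hypothetical helper; writing nr_hugepages requires root.
  ensure_hugepages() {
      local node=$1 want=$2
      local sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
      local have
      have=$(<"$sysfs")
      if ((have >= want)); then
          # Matches the log above: a smaller request leaves the existing allocation alone.
          echo "INFO: Requested ${want} hugepages but ${have} already allocated on node${node}"
          return 0
      fi
      echo "$want" > "$sysfs"   # the kernel may grant fewer pages than asked if memory is fragmented
      echo "node${node}: nr_hugepages now $(<"$sysfs")"
  }

  # Example: ensure_hugepages 0 512 on this host would keep the 1024-page pool as is.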
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.142 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46935312 kB' 'MemAvailable: 50391292 kB' 'Buffers: 2704 kB' 'Cached: 9254956 kB' 'SwapCached: 0 kB' 'Active: 6249312 kB' 'Inactive: 3481172 kB' 'Active(anon): 5864096 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476120 kB' 'Mapped: 205260 kB' 'Shmem: 5391272 kB' 'KReclaimable: 166428 kB' 'Slab: 496484 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330056 kB' 'KernelStack: 12752 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6975376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
[repetitive xtrace condensed: setup/common.sh@32 checks each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages, hits "continue" on every non-match, then re-reads the next line with IFS=': ']
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
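Every get_meminfo call traced here has the same shape: choose /proc/meminfo or the per-node file under /sys/devices/system/node, strip the "Node N " prefixes that the per-node file adds, then walk "key: value" pairs until the requested field matches and echo its value (0 for AnonHugePages above). The sketch below reproduces that pattern in a self-contained form; get_meminfo_sketch is a hypothetical name, and the real helper in setup/common.sh may differ in details.

  # Sketch: print the value of one meminfo field, optionally for a single NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node "$node" }               # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into key and value
          if [[ $var == "$get" ]]; then
              echo "$val"                          # e.g. 0 for HugePages_Surp, sizes are in kB
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  # Example: get_meminfo_sketch HugePages_Total 0   -> 1024 on this host, matching the dump above.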
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46936044 kB' 'MemAvailable: 50392024 kB' 'Buffers: 2704 kB' 'Cached: 9254960 kB' 'SwapCached: 0 kB' 'Active: 6248700 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863484 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475484 kB' 'Mapped: 205248 kB' 'Shmem: 5391276 kB' 'KReclaimable: 166428 kB' 'Slab: 496476 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330048 kB' 'KernelStack: 12704 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6975396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
[repetitive xtrace condensed: setup/common.sh@32 checks each /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp, hits "continue" on every non-match, then re-reads the next line with IFS=': ']
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
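The surplus count has just come back as 0, and the trace next repeats the identical lookup for HugePages_Rsvd; together with the AnonHugePages value these feed the per-node bookkeeping that ends in lines like 'node0=1024 expecting 1024'. The outline below (reusing get_meminfo_sketch from the previous sketch) is hypothetical; the real arithmetic lives in setup/hugepages.sh and may fold surplus and reserved pages into the count differently.

  # Sketch: gather the hugepage counters and compare against the expected per-node total.
  verify_node_hugepages_sketch() {
      local node=$1 expected=$2
      local total surp resv
      # The trace above reads these system-wide; node-qualified lookups work the same way.
      total=$(get_meminfo_sketch HugePages_Total)
      surp=$(get_meminfo_sketch HugePages_Surp)
      resv=$(get_meminfo_sketch HugePages_Rsvd)
      # How surplus/reserved pages enter the final count is script-specific; here they are
      # only reported next to the total for illustration.
      echo "node${node}: total=${total} surp=${surp} resv=${resv}"
      echo "node${node}=${total} expecting ${expected}"
      [[ $total -eq $expected ]]
  }

  # Example: verify_node_hugepages_sketch 0 1024   -> prints node0=1024 expecting 1024 and returns 0.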
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46936044 kB' 'MemAvailable: 50392024 kB' 'Buffers: 2704 kB' 'Cached: 9254960 kB' 'SwapCached: 0 kB' 'Active: 6248260 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863044 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475012 kB' 'Mapped: 205172 kB' 'Shmem: 5391276 kB' 'KReclaimable: 166428 kB' 'Slab: 496476 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330048 kB' 'KernelStack: 12688 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6975416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB'
[repetitive xtrace condensed: setup/common.sh@32 checks each /proc/meminfo key from MemTotal through NFS_Unstable against HugePages_Rsvd, hits "continue" on every non-match, then re-reads the next line with IFS=': ']
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.148 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.149 nr_hugepages=1024 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.149 resv_hugepages=0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.149 surplus_hugepages=0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.149 anon_hugepages=0 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46936420 kB' 'MemAvailable: 50392400 kB' 'Buffers: 2704 kB' 'Cached: 9255000 kB' 'SwapCached: 0 kB' 'Active: 6248648 kB' 'Inactive: 3481172 kB' 'Active(anon): 5863432 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475360 kB' 'Mapped: 205172 kB' 'Shmem: 5391316 kB' 'KReclaimable: 166428 kB' 'Slab: 496476 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 330048 kB' 'KernelStack: 12720 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6975440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1486428 kB' 'DirectMap2M: 13113344 kB' 'DirectMap1G: 54525952 kB' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:30.149 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
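For readers following this xtrace output: the long runs of "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s... ]]" and "continue" above and below are the meminfo-scanning helper in setup/common.sh walking every line of /proc/meminfo (or a node's meminfo file) until it hits the requested key. A condensed sketch of that pattern follows; the function name is hypothetical and the logic is simplified, not SPDK's verbatim helper.

  shopt -s extglob    # the per-node prefix strip below uses an extended glob, as in the traced script
  get_meminfo_sketch() {    # hypothetical name, illustrative only
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }              # per-node files prefix each line with "Node N"
          IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into key and value
          if [[ $var == "$get" ]]; then
              echo "$val"                          # e.g. 1024 for HugePages_Total in this run
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # usage: total=$(get_meminfo_sketch HugePages_Total); surp0=$(get_meminfo_sketch HugePages_Surp 0)

Every non-matching key in the file produces one "continue" trace line, which is why the log repeats the same three commands for MemTotal, MemFree, Buffers, and so on before the match.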
00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.150 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 
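The check at hugepages.sh@107 earlier in this trace, repeated at @110 just below once HugePages_Total has been read, verifies that the meminfo total equals the sum of requested, surplus and reserved pages; in this run that is 1024 == 1024 + 0 + 0. A minimal sketch of that identity, reusing the hypothetical helper above (reading the requested count from /proc/sys/vm/nr_hugepages here is an illustrative assumption, not necessarily how the test obtains it):

  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  surp=$(get_meminfo_sketch HugePages_Surp)
  total=$(get_meminfo_sketch HugePages_Total)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent"        # this run: 1024 == 1024 + 0 + 0
  else
      echo "hugepage accounting mismatch" >&2
  fi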
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27716812 kB' 'MemUsed: 5113072 kB' 'SwapCached: 0 kB' 'Active: 1994404 kB' 'Inactive: 98144 kB' 'Active(anon): 1888404 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 98144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1841908 kB' 'Mapped: 45984 kB' 'AnonPages: 253884 kB' 'Shmem: 1637764 kB' 'KernelStack: 7608 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81324 kB' 'Slab: 275488 kB' 'SReclaimable: 81324 
kB' 'SUnreclaim: 194164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.151 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 
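The surrounding trace (get_nodes at hugepages.sh@27-@33, then get_meminfo HugePages_Surp with node=0 against /sys/devices/system/node/node0/meminfo) is the per-NUMA-node bookkeeping: enumerate the nodes, record how many hugepages each one holds, and check each node's surplus. A condensed, hypothetical sketch of that flow; the sysfs paths are real, but the assumption that per-node totals come from the 2 MiB counters is mine, not confirmed by the log:

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # assumption: per-node totals read from the 2 MiB hugepage counter
      nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"                 # 2 on this test machine
  for n in "${!nodes_sys[@]}"; do
      surp=$(get_meminfo_sketch HugePages_Surp "$n")   # hypothetical helper from earlier
      echo "node$n=${nodes_sys[$n]} expecting ${nodes_sys[$n]} (surplus: ${surp:-0})"
  done

This corresponds to the "node0=1024 expecting 1024" line the test prints once node 0's surplus comes back as 0; the clear_hp step traced a little further down then undoes the allocation by echoing 0 into each node's hugepages-*/ counters and exporting CLEAR_HUGE=yes.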
11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.152 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.153 node0=1024 expecting 1024 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.153 00:03:30.153 real 0m3.014s 00:03:30.153 user 0m1.262s 00:03:30.153 sys 0m1.680s 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.153 11:40:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.153 ************************************ 00:03:30.153 END TEST no_shrink_alloc 00:03:30.153 ************************************ 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.153 11:40:19 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.153 11:40:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.153 00:03:30.153 real 0m12.016s 00:03:30.153 user 0m4.714s 00:03:30.153 sys 0m6.208s 00:03:30.153 11:40:19 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.153 11:40:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.153 ************************************ 00:03:30.153 END TEST hugepages 00:03:30.153 ************************************ 00:03:30.153 11:40:19 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.153 11:40:19 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.153 11:40:19 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.153 11:40:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.153 ************************************ 00:03:30.153 START TEST driver 00:03:30.153 ************************************ 00:03:30.153 11:40:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.412 * Looking for test storage... 00:03:30.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.412 11:40:19 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:30.412 11:40:19 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.412 11:40:19 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.984 11:40:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:32.984 11:40:22 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:32.984 11:40:22 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:32.984 11:40:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:32.984 ************************************ 00:03:32.984 START TEST guess_driver 00:03:32.984 ************************************ 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- 
setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:32.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:32.984 Looking for driver=vfio-pci 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.984 11:40:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 
11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
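The guess_driver trace just above reduces to one decision: if the host exposes IOMMU groups (or vfio is allowed to run in unsafe no-IOMMU mode) and vfio_pci resolves to real kernel modules via modprobe --show-depends, the test settles on vfio-pci. A minimal sketch of that check, assuming the same sysfs paths seen in the trace; the uio_pci_generic fallback is hypothetical here, the run above never takes that branch:

    # Sketch: pick vfio-pci when an IOMMU is usable and the module chain exists.
    shopt -s nullglob
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null || echo N)
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; } \
           && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic   # hypothetical fallback, not exercised in the trace above
    fi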
00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.360 11:40:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.297 11:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.298 11:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.298 11:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.556 11:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:35.556 11:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:35.556 11:40:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.556 11:40:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.091 00:03:38.091 real 0m5.046s 00:03:38.091 user 0m1.093s 00:03:38.091 sys 0m1.963s 00:03:38.092 11:40:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:38.092 11:40:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.092 ************************************ 00:03:38.092 END TEST guess_driver 00:03:38.092 ************************************ 00:03:38.092 00:03:38.092 real 0m7.752s 00:03:38.092 user 0m1.709s 00:03:38.092 sys 0m3.016s 00:03:38.092 11:40:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:38.092 11:40:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.092 ************************************ 00:03:38.092 END TEST driver 00:03:38.092 ************************************ 00:03:38.092 11:40:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.092 11:40:27 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:38.092 11:40:27 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:38.092 11:40:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.092 ************************************ 00:03:38.092 START TEST devices 00:03:38.092 ************************************ 00:03:38.092 11:40:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.092 * Looking for test storage... 
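Each "read -r _ _ _ _ marker setup_driver" / "[[ -> == ... ]]" pair above is guess_driver re-reading the output of setup.sh config and confirming that every bound device reports the driver it just picked, bumping a failure counter on any mismatch before the final (( fail == 0 )) check and reset. Roughly, with the field layout inferred from the trace and $rootdir/$driver assumed to be the spdk checkout and the chosen driver:

    # Sketch: every "... -> <driver>" line from setup.sh config must match the pick.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue            # only bound-device lines carry the arrow
        [[ $setup_driver == "$driver" ]] || fail=$((fail + 1))
    done < <("$rootdir/scripts/setup.sh" config)
    (( fail == 0 )) && echo "every device is bound to $driver"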
00:03:38.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.092 11:40:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:38.092 11:40:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:38.092 11:40:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.092 11:40:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.468 11:40:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:39.468 11:40:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:39.468 11:40:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:39.468 11:40:28 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.468 No valid GPT data, bailing 00:03:39.728 11:40:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.728 11:40:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:39.728 11:40:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:39.728 11:40:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:39.728 11:40:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:39.728 11:40:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:39.728 11:40:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:39.728 11:40:28 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:39.728 11:40:28 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:39.728 11:40:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:39.728 ************************************ 00:03:39.728 START TEST nvme_mount 00:03:39.728 ************************************ 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.728 11:40:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.728 11:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.728 11:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:40.666 Creating new GPT entries in memory. 00:03:40.666 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:40.666 other utilities. 00:03:40.666 11:40:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:40.666 11:40:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.666 11:40:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:40.666 11:40:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.666 11:40:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.604 Creating new GPT entries in memory. 00:03:41.604 The operation has completed successfully. 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 791189 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:41.604 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.862 11:40:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.798 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.057 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.057 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.315 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:43.315 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:43.315 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.315 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.315 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.573 11:40:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.520 11:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:44.780 
11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.780 11:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.160 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.160 00:03:46.160 real 0m6.483s 00:03:46.160 user 0m1.555s 00:03:46.160 sys 0m2.530s 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.160 11:40:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:46.160 ************************************ 00:03:46.160 END TEST nvme_mount 00:03:46.160 ************************************ 00:03:46.160 11:40:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.160 11:40:35 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.160 11:40:35 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.160 11:40:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.160 ************************************ 00:03:46.160 START TEST dm_mount 00:03:46.160 ************************************ 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.160 11:40:35 setup.sh.devices.dm_mount 
-- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.160 11:40:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.100 Creating new GPT entries in memory. 00:03:47.100 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.100 other utilities. 00:03:47.100 11:40:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.100 11:40:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.100 11:40:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.100 11:40:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.100 11:40:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.477 Creating new GPT entries in memory. 00:03:48.477 The operation has completed successfully. 00:03:48.477 11:40:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.477 11:40:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.477 11:40:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.477 11:40:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.477 11:40:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:49.411 The operation has completed successfully. 
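The "operation has completed successfully" lines above are the tail of partition_drive: the disk's GPT is zapped, then each 1 GiB partition is carved out back to back while sync_dev_uevents.sh waits for the matching partition uevents. The sector math in the trace (2048..2099199, then 2099200..4196351) is just consecutive 2097152-sector windows. A condensed sketch of the same sequence, uevent waiting omitted:

    # Sketch: wipe the GPT and lay out consecutive 1 GiB partitions, as traced.
    disk=/dev/nvme0n1
    part_no=2                      # nvme_mount used 1 partition, dm_mount uses 2
    size=$(( 1073741824 / 512 ))   # partition size in 512-byte sectors (2097152)
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes against other partitioners touching the same disk
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done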
00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 793582 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:03:49.411 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.412 11:40:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.347 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:50.604 11:40:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:50.604 11:40:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.980 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:51.981 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.981 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:51.981 11:40:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:51.981 00:03:51.981 real 0m5.820s 00:03:51.981 user 0m0.983s 00:03:51.981 sys 0m1.682s 00:03:51.981 11:40:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:51.981 11:40:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:51.981 ************************************ 00:03:51.981 END TEST dm_mount 00:03:51.981 ************************************ 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:51.981 11:40:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.276 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.276 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.276 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.276 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 
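The cleanup_dm sequence traced above reduces to unmounting the test directory, removing the device-mapper target, and wiping the partition signatures. A minimal standalone sketch of those steps, using the nvme_dm_test mapping and /dev/nvme0n1p1 / /dev/nvme0n1p2 partitions from this run (names are assumptions on any other system):

  # Sketch only, mirroring the cleanup_dm trace above; not the setup/devices.sh source.
  mount_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
  dm_name=nvme_dm_test

  mountpoint -q "$mount_dir" && umount "$mount_dir"                    # drop the test mount if still present
  [[ -L /dev/mapper/$dm_name ]] && dmsetup remove --force "$dm_name"   # tear down the dm target
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"                           # erase leftover filesystem signatures
  done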
00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.276 11:40:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.276 00:03:52.276 real 0m14.218s 00:03:52.276 user 0m3.196s 00:03:52.276 sys 0m5.235s 00:03:52.276 11:40:41 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:52.276 11:40:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.276 ************************************ 00:03:52.276 END TEST devices 00:03:52.276 ************************************ 00:03:52.276 00:03:52.276 real 0m45.062s 00:03:52.276 user 0m13.050s 00:03:52.276 sys 0m20.066s 00:03:52.276 11:40:41 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:52.276 11:40:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.276 ************************************ 00:03:52.276 END TEST setup.sh 00:03:52.276 ************************************ 00:03:52.276 11:40:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.652 Hugepages 00:03:53.652 node hugesize free / total 00:03:53.652 node0 1048576kB 0 / 0 00:03:53.652 node0 2048kB 2048 / 2048 00:03:53.652 node1 1048576kB 0 / 0 00:03:53.652 node1 2048kB 0 / 0 00:03:53.652 00:03:53.652 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.652 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:53.652 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:53.652 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:53.652 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.652 11:40:42 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.652 11:40:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.652 11:40:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.652 11:40:42 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.028 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.028 0000:00:04.0 (8086 
0e20): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.028 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.968 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.968 11:40:45 -- common/autotest_common.sh@1531 -- # sleep 1 00:03:57.344 11:40:46 -- common/autotest_common.sh@1532 -- # bdfs=() 00:03:57.344 11:40:46 -- common/autotest_common.sh@1532 -- # local bdfs 00:03:57.344 11:40:46 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:03:57.344 11:40:46 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:03:57.344 11:40:46 -- common/autotest_common.sh@1512 -- # bdfs=() 00:03:57.344 11:40:46 -- common/autotest_common.sh@1512 -- # local bdfs 00:03:57.344 11:40:46 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.344 11:40:46 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:57.344 11:40:46 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:03:57.344 11:40:46 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:03:57.344 11:40:46 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:88:00.0 00:03:57.344 11:40:46 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.279 Waiting for block devices as requested 00:03:58.279 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.538 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.538 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.538 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.796 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.796 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:58.796 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:58.796 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:59.055 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:59.055 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:59.055 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:59.313 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:59.313 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:59.313 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:59.313 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:59.570 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:59.570 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:59.570 11:40:49 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:03:59.570 11:40:49 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1501 -- # grep 0000:88:00.0/nvme/nvme 00:03:59.571 11:40:49 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:59.571 11:40:49 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.571 
11:40:49 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:03:59.571 11:40:49 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1544 -- # grep oacs 00:03:59.571 11:40:49 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:03:59.571 11:40:49 -- common/autotest_common.sh@1544 -- # oacs=' 0xf' 00:03:59.571 11:40:49 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:03:59.571 11:40:49 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:03:59.571 11:40:49 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:03:59.571 11:40:49 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:03:59.571 11:40:49 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:03:59.571 11:40:49 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:03:59.571 11:40:49 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:03:59.571 11:40:49 -- common/autotest_common.sh@1556 -- # continue 00:03:59.571 11:40:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:59.571 11:40:49 -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:59.571 11:40:49 -- common/autotest_common.sh@10 -- # set +x 00:03:59.828 11:40:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:59.828 11:40:49 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:59.828 11:40:49 -- common/autotest_common.sh@10 -- # set +x 00:03:59.828 11:40:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.765 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.765 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.022 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:01.022 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:01.022 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:01.023 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:01.023 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:01.023 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:01.023 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:01.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.956 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.956 11:40:51 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:01.956 11:40:51 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:01.956 11:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:01.956 11:40:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:01.956 11:40:51 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:01.956 11:40:51 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.956 11:40:51 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:01.956 11:40:51 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:01.956 11:40:51 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:01.956 11:40:51 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:01.956 11:40:51 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:01.956 11:40:51 -- common/autotest_common.sh@1513 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.956 11:40:51 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.956 11:40:51 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:01.956 11:40:51 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:01.956 11:40:51 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:88:00.0 00:04:01.956 11:40:51 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:01.956 11:40:51 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:01.956 11:40:51 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:04:01.956 11:40:51 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:01.956 11:40:51 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:04:01.956 11:40:51 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:88:00.0 00:04:01.956 11:40:51 -- common/autotest_common.sh@1591 -- # [[ -z 0000:88:00.0 ]] 00:04:01.956 11:40:51 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=798882 00:04:01.956 11:40:51 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.956 11:40:51 -- common/autotest_common.sh@1597 -- # waitforlisten 798882 00:04:01.956 11:40:51 -- common/autotest_common.sh@830 -- # '[' -z 798882 ']' 00:04:01.956 11:40:51 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.956 11:40:51 -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:01.956 11:40:51 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.956 11:40:51 -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:01.956 11:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:02.213 [2024-07-12 11:40:51.482434] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
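The opal_revert_cleanup prologue above discovers candidate controllers with the same gen_nvme.sh/jq pipeline used by get_nvme_bdfs, then keeps only devices whose PCI device id is 0x0a54. A hand-run sketch of that discovery (the workspace path below is this run's; treat it as an assumption elsewhere):

  # Sketch: enumerate NVMe BDFs and filter on the 0x0a54 device id, as in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
          echo "opal revert candidate: $bdf"
      fi
  done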
00:04:02.213 [2024-07-12 11:40:51.482504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798882 ] 00:04:02.213 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.213 [2024-07-12 11:40:51.538808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.213 [2024-07-12 11:40:51.638931] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.471 11:40:51 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:02.471 11:40:51 -- common/autotest_common.sh@863 -- # return 0 00:04:02.471 11:40:51 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:04:02.471 11:40:51 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:04:02.471 11:40:51 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:05.752 nvme0n1 00:04:05.752 11:40:54 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:05.752 [2024-07-12 11:40:55.185748] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:05.752 [2024-07-12 11:40:55.185784] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:05.752 request: 00:04:05.752 { 00:04:05.752 "nvme_ctrlr_name": "nvme0", 00:04:05.753 "password": "test", 00:04:05.753 "method": "bdev_nvme_opal_revert", 00:04:05.753 "req_id": 1 00:04:05.753 } 00:04:05.753 Got JSON-RPC error response 00:04:05.753 response: 00:04:05.753 { 00:04:05.753 "code": -32603, 00:04:05.753 "message": "Internal error" 00:04:05.753 } 00:04:05.753 11:40:55 -- common/autotest_common.sh@1603 -- # true 00:04:05.753 11:40:55 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:04:05.753 11:40:55 -- common/autotest_common.sh@1607 -- # killprocess 798882 00:04:05.753 11:40:55 -- common/autotest_common.sh@949 -- # '[' -z 798882 ']' 00:04:05.753 11:40:55 -- common/autotest_common.sh@953 -- # kill -0 798882 00:04:05.753 11:40:55 -- common/autotest_common.sh@954 -- # uname 00:04:05.753 11:40:55 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:05.753 11:40:55 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 798882 00:04:05.753 11:40:55 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:05.753 11:40:55 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:05.753 11:40:55 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 798882' 00:04:05.753 killing process with pid 798882 00:04:05.753 11:40:55 -- common/autotest_common.sh@968 -- # kill 798882 00:04:05.753 11:40:55 -- common/autotest_common.sh@973 -- # wait 798882 00:04:07.645 11:40:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:07.645 11:40:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:07.645 11:40:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:07.645 11:40:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:07.645 11:40:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:07.645 11:40:57 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:07.645 11:40:57 -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 11:40:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:07.645 11:40:57 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:07.645 11:40:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.645 11:40:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.645 11:40:57 -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 ************************************ 00:04:07.645 START TEST env 00:04:07.645 ************************************ 00:04:07.645 11:40:57 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:07.645 * Looking for test storage... 00:04:07.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:07.645 11:40:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.645 11:40:57 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.645 11:40:57 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.645 11:40:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 ************************************ 00:04:07.645 START TEST env_memory 00:04:07.645 ************************************ 00:04:07.645 11:40:57 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.645 00:04:07.645 00:04:07.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.645 http://cunit.sourceforge.net/ 00:04:07.645 00:04:07.645 00:04:07.645 Suite: memory 00:04:07.902 Test: alloc and free memory map ...[2024-07-12 11:40:57.148574] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.902 passed 00:04:07.902 Test: mem map translation ...[2024-07-12 11:40:57.168228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.902 [2024-07-12 11:40:57.168248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.902 [2024-07-12 11:40:57.168305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.902 [2024-07-12 11:40:57.168318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.902 passed 00:04:07.902 Test: mem map registration ...[2024-07-12 11:40:57.208661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:07.902 [2024-07-12 11:40:57.208680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:07.902 passed 00:04:07.902 Test: mem map adjacent registrations ...passed 00:04:07.902 00:04:07.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.902 suites 1 1 n/a 0 0 00:04:07.902 tests 4 4 4 0 0 00:04:07.902 asserts 152 152 152 0 n/a 00:04:07.902 00:04:07.902 Elapsed time = 0.139 seconds 00:04:07.902 00:04:07.902 real 0m0.147s 00:04:07.902 user 0m0.139s 00:04:07.902 sys 0m0.007s 00:04:07.902 11:40:57 
env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:07.902 11:40:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.902 ************************************ 00:04:07.902 END TEST env_memory 00:04:07.902 ************************************ 00:04:07.902 11:40:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.902 11:40:57 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.902 11:40:57 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.902 11:40:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.902 ************************************ 00:04:07.902 START TEST env_vtophys 00:04:07.902 ************************************ 00:04:07.902 11:40:57 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.902 EAL: lib.eal log level changed from notice to debug 00:04:07.902 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.902 EAL: Detected lcore 1 as core 1 on socket 0 00:04:07.902 EAL: Detected lcore 2 as core 2 on socket 0 00:04:07.902 EAL: Detected lcore 3 as core 3 on socket 0 00:04:07.902 EAL: Detected lcore 4 as core 4 on socket 0 00:04:07.902 EAL: Detected lcore 5 as core 5 on socket 0 00:04:07.902 EAL: Detected lcore 6 as core 8 on socket 0 00:04:07.902 EAL: Detected lcore 7 as core 9 on socket 0 00:04:07.902 EAL: Detected lcore 8 as core 10 on socket 0 00:04:07.902 EAL: Detected lcore 9 as core 11 on socket 0 00:04:07.902 EAL: Detected lcore 10 as core 12 on socket 0 00:04:07.902 EAL: Detected lcore 11 as core 13 on socket 0 00:04:07.902 EAL: Detected lcore 12 as core 0 on socket 1 00:04:07.902 EAL: Detected lcore 13 as core 1 on socket 1 00:04:07.902 EAL: Detected lcore 14 as core 2 on socket 1 00:04:07.902 EAL: Detected lcore 15 as core 3 on socket 1 00:04:07.902 EAL: Detected lcore 16 as core 4 on socket 1 00:04:07.902 EAL: Detected lcore 17 as core 5 on socket 1 00:04:07.902 EAL: Detected lcore 18 as core 8 on socket 1 00:04:07.902 EAL: Detected lcore 19 as core 9 on socket 1 00:04:07.902 EAL: Detected lcore 20 as core 10 on socket 1 00:04:07.902 EAL: Detected lcore 21 as core 11 on socket 1 00:04:07.902 EAL: Detected lcore 22 as core 12 on socket 1 00:04:07.902 EAL: Detected lcore 23 as core 13 on socket 1 00:04:07.902 EAL: Detected lcore 24 as core 0 on socket 0 00:04:07.902 EAL: Detected lcore 25 as core 1 on socket 0 00:04:07.902 EAL: Detected lcore 26 as core 2 on socket 0 00:04:07.902 EAL: Detected lcore 27 as core 3 on socket 0 00:04:07.902 EAL: Detected lcore 28 as core 4 on socket 0 00:04:07.902 EAL: Detected lcore 29 as core 5 on socket 0 00:04:07.902 EAL: Detected lcore 30 as core 8 on socket 0 00:04:07.902 EAL: Detected lcore 31 as core 9 on socket 0 00:04:07.902 EAL: Detected lcore 32 as core 10 on socket 0 00:04:07.902 EAL: Detected lcore 33 as core 11 on socket 0 00:04:07.902 EAL: Detected lcore 34 as core 12 on socket 0 00:04:07.902 EAL: Detected lcore 35 as core 13 on socket 0 00:04:07.902 EAL: Detected lcore 36 as core 0 on socket 1 00:04:07.902 EAL: Detected lcore 37 as core 1 on socket 1 00:04:07.902 EAL: Detected lcore 38 as core 2 on socket 1 00:04:07.902 EAL: Detected lcore 39 as core 3 on socket 1 00:04:07.902 EAL: Detected lcore 40 as core 4 on socket 1 00:04:07.902 EAL: Detected lcore 41 as core 5 on socket 1 00:04:07.902 EAL: Detected lcore 42 as core 8 on socket 1 00:04:07.902 EAL: Detected lcore 43 as core 9 
on socket 1 00:04:07.902 EAL: Detected lcore 44 as core 10 on socket 1 00:04:07.902 EAL: Detected lcore 45 as core 11 on socket 1 00:04:07.902 EAL: Detected lcore 46 as core 12 on socket 1 00:04:07.902 EAL: Detected lcore 47 as core 13 on socket 1 00:04:07.902 EAL: Maximum logical cores by configuration: 128 00:04:07.902 EAL: Detected CPU lcores: 48 00:04:07.902 EAL: Detected NUMA nodes: 2 00:04:07.902 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.902 EAL: Detected shared linkage of DPDK 00:04:07.902 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.902 EAL: Bus pci wants IOVA as 'DC' 00:04:07.902 EAL: Buses did not request a specific IOVA mode. 00:04:07.902 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:07.902 EAL: Selected IOVA mode 'VA' 00:04:07.902 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.902 EAL: Probing VFIO support... 00:04:07.902 EAL: IOMMU type 1 (Type 1) is supported 00:04:07.902 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:07.902 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:07.902 EAL: VFIO support initialized 00:04:07.902 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.902 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.902 EAL: Setting up physically contiguous memory... 00:04:07.902 EAL: Setting maximum number of open files to 524288 00:04:07.902 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.902 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:07.902 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.902 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.902 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.902 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.902 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.902 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.902 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.902 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.902 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.902 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.902 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.902 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.902 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.902 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.902 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.902 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.902 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.902 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.902 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.902 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.902 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.902 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.902 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.902 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.902 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.902 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:07.902 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.902 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:07.902 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.902 EAL: Ask a virtual 
area of 0x400000000 bytes 00:04:07.902 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:07.903 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:07.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.903 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:07.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.903 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:07.903 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:07.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.903 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:07.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.903 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:07.903 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:07.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.903 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:07.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.903 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:07.903 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:07.903 EAL: Hugepages will be freed exactly as allocated. 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: TSC frequency is ~2700000 KHz 00:04:07.903 EAL: Main lcore 0 is ready (tid=7fb851063a00;cpuset=[0]) 00:04:07.903 EAL: Trying to obtain current memory policy. 00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.903 EAL: Restoring previous memory policy: 0 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.903 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.903 00:04:07.903 00:04:07.903 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.903 http://cunit.sourceforge.net/ 00:04:07.903 00:04:07.903 00:04:07.903 Suite: components_suite 00:04:07.903 Test: vtophys_malloc_test ...passed 00:04:07.903 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.903 EAL: Restoring previous memory policy: 4 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.903 EAL: Trying to obtain current memory policy. 
00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.903 EAL: Restoring previous memory policy: 4 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.903 EAL: Trying to obtain current memory policy. 00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.903 EAL: Restoring previous memory policy: 4 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.903 EAL: Trying to obtain current memory policy. 00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.903 EAL: Restoring previous memory policy: 4 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.903 EAL: request: mp_malloc_sync 00:04:07.903 EAL: No shared files mode enabled, IPC is disabled 00:04:07.903 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.903 EAL: Trying to obtain current memory policy. 00:04:07.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.159 EAL: Restoring previous memory policy: 4 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.159 EAL: Trying to obtain current memory policy. 00:04:08.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.159 EAL: Restoring previous memory policy: 4 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.159 EAL: Trying to obtain current memory policy. 
00:04:08.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.159 EAL: Restoring previous memory policy: 4 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was expanded by 130MB 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.159 EAL: Trying to obtain current memory policy. 00:04:08.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.159 EAL: Restoring previous memory policy: 4 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.159 EAL: request: mp_malloc_sync 00:04:08.159 EAL: No shared files mode enabled, IPC is disabled 00:04:08.159 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.417 EAL: request: mp_malloc_sync 00:04:08.417 EAL: No shared files mode enabled, IPC is disabled 00:04:08.417 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.417 EAL: Trying to obtain current memory policy. 00:04:08.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.417 EAL: Restoring previous memory policy: 4 00:04:08.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.417 EAL: request: mp_malloc_sync 00:04:08.417 EAL: No shared files mode enabled, IPC is disabled 00:04:08.417 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.675 EAL: request: mp_malloc_sync 00:04:08.675 EAL: No shared files mode enabled, IPC is disabled 00:04:08.675 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.675 EAL: Trying to obtain current memory policy. 
00:04:08.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.933 EAL: Restoring previous memory policy: 4 00:04:08.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.933 EAL: request: mp_malloc_sync 00:04:08.933 EAL: No shared files mode enabled, IPC is disabled 00:04:08.933 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.483 EAL: request: mp_malloc_sync 00:04:09.483 EAL: No shared files mode enabled, IPC is disabled 00:04:09.483 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.483 passed 00:04:09.483 00:04:09.483 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.483 suites 1 1 n/a 0 0 00:04:09.483 tests 2 2 2 0 0 00:04:09.483 asserts 497 497 497 0 n/a 00:04:09.483 00:04:09.483 Elapsed time = 1.311 seconds 00:04:09.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.483 EAL: request: mp_malloc_sync 00:04:09.483 EAL: No shared files mode enabled, IPC is disabled 00:04:09.483 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.483 EAL: No shared files mode enabled, IPC is disabled 00:04:09.483 EAL: No shared files mode enabled, IPC is disabled 00:04:09.483 EAL: No shared files mode enabled, IPC is disabled 00:04:09.483 00:04:09.483 real 0m1.421s 00:04:09.483 user 0m0.828s 00:04:09.483 sys 0m0.561s 00:04:09.483 11:40:58 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.483 11:40:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.483 ************************************ 00:04:09.483 END TEST env_vtophys 00:04:09.483 ************************************ 00:04:09.483 11:40:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.483 11:40:58 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:09.483 11:40:58 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.483 11:40:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.483 ************************************ 00:04:09.483 START TEST env_pci 00:04:09.483 ************************************ 00:04:09.483 11:40:58 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.483 00:04:09.483 00:04:09.483 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.483 http://cunit.sourceforge.net/ 00:04:09.483 00:04:09.483 00:04:09.483 Suite: pci 00:04:09.483 Test: pci_hook ...[2024-07-12 11:40:58.787219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 799770 has claimed it 00:04:09.483 EAL: Cannot find device (10000:00:01.0) 00:04:09.483 EAL: Failed to attach device on primary process 00:04:09.483 passed 00:04:09.483 00:04:09.483 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.483 suites 1 1 n/a 0 0 00:04:09.483 tests 1 1 1 0 0 00:04:09.483 asserts 25 25 25 0 n/a 00:04:09.483 00:04:09.483 Elapsed time = 0.022 seconds 00:04:09.483 00:04:09.483 real 0m0.036s 00:04:09.483 user 0m0.013s 00:04:09.483 sys 0m0.023s 00:04:09.483 11:40:58 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.483 11:40:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.483 ************************************ 00:04:09.483 END TEST env_pci 00:04:09.483 ************************************ 00:04:09.483 11:40:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.483 
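Every suite in this log is framed the same way: a START TEST banner, the timed command, then an END TEST banner, all produced by the run_test helper named in the trace. The sketch below only illustrates that framing pattern; the real helper in autotest_common.sh additionally does argument checks and xtrace handling:

  # Illustrative re-creation of the run_test framing seen throughout this log.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut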
11:40:58 env -- env/env.sh@15 -- # uname 00:04:09.483 11:40:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.483 11:40:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.483 11:40:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.483 11:40:58 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:09.483 11:40:58 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.483 11:40:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.483 ************************************ 00:04:09.483 START TEST env_dpdk_post_init 00:04:09.483 ************************************ 00:04:09.483 11:40:58 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.483 EAL: Detected CPU lcores: 48 00:04:09.483 EAL: Detected NUMA nodes: 2 00:04:09.483 EAL: Detected shared linkage of DPDK 00:04:09.483 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.483 EAL: Selected IOVA mode 'VA' 00:04:09.483 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.483 EAL: VFIO support initialized 00:04:09.483 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.483 EAL: Using IOMMU type 1 (Type 1) 00:04:09.483 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:09.743 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:10.686 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:14.019 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:14.019 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:14.019 Starting DPDK initialization... 00:04:14.019 Starting SPDK post initialization... 00:04:14.019 SPDK NVMe probe 00:04:14.019 Attaching to 0000:88:00.0 00:04:14.019 Attached to 0000:88:00.0 00:04:14.019 Cleaning up... 
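After a setup.sh config pass like the ones above, the devices handed to SPDK should be attached to vfio-pci while everything else keeps its kernel driver (ioatdma, nvme). A quick, SPDK-independent way to audit the result straight from sysfs:

  # Report the currently bound driver for every PCI function.
  for dev in /sys/bus/pci/devices/*; do
      drv=none
      [[ -e $dev/driver ]] && drv=$(basename "$(readlink -f "$dev/driver")")
      printf '%s -> %s\n' "$(basename "$dev")" "$drv"
  done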
00:04:14.019 00:04:14.019 real 0m4.384s 00:04:14.019 user 0m3.264s 00:04:14.019 sys 0m0.177s 00:04:14.019 11:41:03 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.019 11:41:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.019 ************************************ 00:04:14.019 END TEST env_dpdk_post_init 00:04:14.019 ************************************ 00:04:14.019 11:41:03 env -- env/env.sh@26 -- # uname 00:04:14.019 11:41:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.019 11:41:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.019 11:41:03 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.019 11:41:03 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.019 11:41:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.019 ************************************ 00:04:14.019 START TEST env_mem_callbacks 00:04:14.019 ************************************ 00:04:14.019 11:41:03 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.019 EAL: Detected CPU lcores: 48 00:04:14.019 EAL: Detected NUMA nodes: 2 00:04:14.019 EAL: Detected shared linkage of DPDK 00:04:14.019 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.019 EAL: Selected IOVA mode 'VA' 00:04:14.019 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.019 EAL: VFIO support initialized 00:04:14.019 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.019 00:04:14.019 00:04:14.019 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.019 http://cunit.sourceforge.net/ 00:04:14.019 00:04:14.019 00:04:14.019 Suite: memory 00:04:14.019 Test: test ... 
00:04:14.019 register 0x200000200000 2097152 00:04:14.019 malloc 3145728 00:04:14.019 register 0x200000400000 4194304 00:04:14.019 buf 0x200000500000 len 3145728 PASSED 00:04:14.019 malloc 64 00:04:14.019 buf 0x2000004fff40 len 64 PASSED 00:04:14.019 malloc 4194304 00:04:14.019 register 0x200000800000 6291456 00:04:14.019 buf 0x200000a00000 len 4194304 PASSED 00:04:14.019 free 0x200000500000 3145728 00:04:14.019 free 0x2000004fff40 64 00:04:14.019 unregister 0x200000400000 4194304 PASSED 00:04:14.019 free 0x200000a00000 4194304 00:04:14.019 unregister 0x200000800000 6291456 PASSED 00:04:14.019 malloc 8388608 00:04:14.019 register 0x200000400000 10485760 00:04:14.019 buf 0x200000600000 len 8388608 PASSED 00:04:14.019 free 0x200000600000 8388608 00:04:14.019 unregister 0x200000400000 10485760 PASSED 00:04:14.019 passed 00:04:14.019 00:04:14.019 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.019 suites 1 1 n/a 0 0 00:04:14.019 tests 1 1 1 0 0 00:04:14.019 asserts 15 15 15 0 n/a 00:04:14.019 00:04:14.019 Elapsed time = 0.004 seconds 00:04:14.019 00:04:14.019 real 0m0.049s 00:04:14.019 user 0m0.014s 00:04:14.019 sys 0m0.034s 00:04:14.019 11:41:03 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.019 11:41:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.019 ************************************ 00:04:14.019 END TEST env_mem_callbacks 00:04:14.019 ************************************ 00:04:14.019 00:04:14.019 real 0m6.329s 00:04:14.019 user 0m4.380s 00:04:14.019 sys 0m0.990s 00:04:14.019 11:41:03 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.019 11:41:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.019 ************************************ 00:04:14.019 END TEST env 00:04:14.019 ************************************ 00:04:14.019 11:41:03 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.019 11:41:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.019 11:41:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.019 11:41:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.019 ************************************ 00:04:14.019 START TEST rpc 00:04:14.019 ************************************ 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.020 * Looking for test storage... 00:04:14.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.020 11:41:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=800421 00:04:14.020 11:41:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:14.020 11:41:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.020 11:41:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 800421 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@830 -- # '[' -z 800421 ']' 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
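At this point the rpc suite has launched spdk_tgt -e bdev and is blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A reduced sketch of that wait, followed by a first call through scripts/rpc.py (rpc_cmd ultimately drives the same script); rpc_get_methods is used here only as a cheap liveness probe:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_sock=/var/tmp/spdk.sock

  # Poll until the target's JSON-RPC server responds, then issue a real call.
  until "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" bdev_get_bdevs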
00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:14.020 11:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.277 [2024-07-12 11:41:03.517245] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:14.277 [2024-07-12 11:41:03.517319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800421 ] 00:04:14.277 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.278 [2024-07-12 11:41:03.574629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.278 [2024-07-12 11:41:03.680331] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.278 [2024-07-12 11:41:03.680384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 800421' to capture a snapshot of events at runtime. 00:04:14.278 [2024-07-12 11:41:03.680397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.278 [2024-07-12 11:41:03.680408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.278 [2024-07-12 11:41:03.680418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid800421 for offline analysis/debug. 00:04:14.278 [2024-07-12 11:41:03.680445] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.536 11:41:03 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:14.536 11:41:03 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:14.536 11:41:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.536 11:41:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.536 11:41:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.536 11:41:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.536 11:41:03 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.536 11:41:03 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.536 11:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.536 ************************************ 00:04:14.536 START TEST rpc_integrity 00:04:14.536 ************************************ 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:14.536 11:41:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.536 11:41:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.536 11:41:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.536 11:41:03 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.536 11:41:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.536 11:41:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.536 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.536 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.536 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.536 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.536 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.536 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.536 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.536 { 00:04:14.536 "name": "Malloc0", 00:04:14.536 "aliases": [ 00:04:14.536 "50d21ce8-3e80-46c0-8112-15aa7e339153" 00:04:14.536 ], 00:04:14.536 "product_name": "Malloc disk", 00:04:14.536 "block_size": 512, 00:04:14.536 "num_blocks": 16384, 00:04:14.536 "uuid": "50d21ce8-3e80-46c0-8112-15aa7e339153", 00:04:14.536 "assigned_rate_limits": { 00:04:14.536 "rw_ios_per_sec": 0, 00:04:14.536 "rw_mbytes_per_sec": 0, 00:04:14.536 "r_mbytes_per_sec": 0, 00:04:14.536 "w_mbytes_per_sec": 0 00:04:14.536 }, 00:04:14.536 "claimed": false, 00:04:14.536 "zoned": false, 00:04:14.536 "supported_io_types": { 00:04:14.536 "read": true, 00:04:14.536 "write": true, 00:04:14.536 "unmap": true, 00:04:14.536 "write_zeroes": true, 00:04:14.536 "flush": true, 00:04:14.536 "reset": true, 00:04:14.536 "compare": false, 00:04:14.536 "compare_and_write": false, 00:04:14.536 "abort": true, 00:04:14.536 "nvme_admin": false, 00:04:14.536 "nvme_io": false 00:04:14.536 }, 00:04:14.536 "memory_domains": [ 00:04:14.536 { 00:04:14.536 "dma_device_id": "system", 00:04:14.536 "dma_device_type": 1 00:04:14.536 }, 00:04:14.536 { 00:04:14.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.536 "dma_device_type": 2 00:04:14.536 } 00:04:14.536 ], 00:04:14.536 "driver_specific": {} 00:04:14.536 } 00:04:14.536 ]' 00:04:14.536 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.794 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.794 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.794 [2024-07-12 11:41:04.056386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.794 [2024-07-12 11:41:04.056424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.794 [2024-07-12 11:41:04.056444] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x186fda0 00:04:14.794 [2024-07-12 11:41:04.056456] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.794 [2024-07-12 11:41:04.057781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.794 [2024-07-12 11:41:04.057804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.794 Passthru0 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.794 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.794 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.794 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.794 { 00:04:14.794 "name": "Malloc0", 00:04:14.794 "aliases": [ 00:04:14.794 "50d21ce8-3e80-46c0-8112-15aa7e339153" 00:04:14.794 ], 00:04:14.794 "product_name": "Malloc disk", 00:04:14.794 "block_size": 512, 00:04:14.794 "num_blocks": 16384, 00:04:14.794 "uuid": "50d21ce8-3e80-46c0-8112-15aa7e339153", 00:04:14.794 "assigned_rate_limits": { 00:04:14.794 "rw_ios_per_sec": 0, 00:04:14.794 "rw_mbytes_per_sec": 0, 00:04:14.794 "r_mbytes_per_sec": 0, 00:04:14.794 "w_mbytes_per_sec": 0 00:04:14.794 }, 00:04:14.794 "claimed": true, 00:04:14.794 "claim_type": "exclusive_write", 00:04:14.794 "zoned": false, 00:04:14.794 "supported_io_types": { 00:04:14.794 "read": true, 00:04:14.794 "write": true, 00:04:14.794 "unmap": true, 00:04:14.794 "write_zeroes": true, 00:04:14.794 "flush": true, 00:04:14.794 "reset": true, 00:04:14.794 "compare": false, 00:04:14.794 "compare_and_write": false, 00:04:14.794 "abort": true, 00:04:14.794 "nvme_admin": false, 00:04:14.794 "nvme_io": false 00:04:14.794 }, 00:04:14.794 "memory_domains": [ 00:04:14.794 { 00:04:14.794 "dma_device_id": "system", 00:04:14.794 "dma_device_type": 1 00:04:14.794 }, 00:04:14.795 { 00:04:14.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.795 "dma_device_type": 2 00:04:14.795 } 00:04:14.795 ], 00:04:14.795 "driver_specific": {} 00:04:14.795 }, 00:04:14.795 { 00:04:14.795 "name": "Passthru0", 00:04:14.795 "aliases": [ 00:04:14.795 "f88f091d-caf3-5b6c-ba88-5e8a14b27d72" 00:04:14.795 ], 00:04:14.795 "product_name": "passthru", 00:04:14.795 "block_size": 512, 00:04:14.795 "num_blocks": 16384, 00:04:14.795 "uuid": "f88f091d-caf3-5b6c-ba88-5e8a14b27d72", 00:04:14.795 "assigned_rate_limits": { 00:04:14.795 "rw_ios_per_sec": 0, 00:04:14.795 "rw_mbytes_per_sec": 0, 00:04:14.795 "r_mbytes_per_sec": 0, 00:04:14.795 "w_mbytes_per_sec": 0 00:04:14.795 }, 00:04:14.795 "claimed": false, 00:04:14.795 "zoned": false, 00:04:14.795 "supported_io_types": { 00:04:14.795 "read": true, 00:04:14.795 "write": true, 00:04:14.795 "unmap": true, 00:04:14.795 "write_zeroes": true, 00:04:14.795 "flush": true, 00:04:14.795 "reset": true, 00:04:14.795 "compare": false, 00:04:14.795 "compare_and_write": false, 00:04:14.795 "abort": true, 00:04:14.795 "nvme_admin": false, 00:04:14.795 "nvme_io": false 00:04:14.795 }, 00:04:14.795 "memory_domains": [ 00:04:14.795 { 00:04:14.795 "dma_device_id": "system", 00:04:14.795 "dma_device_type": 1 00:04:14.795 }, 00:04:14.795 { 00:04:14.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.795 "dma_device_type": 2 00:04:14.795 } 00:04:14.795 ], 00:04:14.795 "driver_specific": { 00:04:14.795 "passthru": { 00:04:14.795 "name": "Passthru0", 00:04:14.795 "base_bdev_name": "Malloc0" 00:04:14.795 } 00:04:14.795 } 00:04:14.795 } 00:04:14.795 ]' 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 
11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.795 11:41:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.795 00:04:14.795 real 0m0.220s 00:04:14.795 user 0m0.138s 00:04:14.795 sys 0m0.022s 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 ************************************ 00:04:14.795 END TEST rpc_integrity 00:04:14.795 ************************************ 00:04:14.795 11:41:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.795 11:41:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.795 11:41:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.795 11:41:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 ************************************ 00:04:14.795 START TEST rpc_plugins 00:04:14.795 ************************************ 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.795 { 00:04:14.795 "name": "Malloc1", 00:04:14.795 "aliases": [ 00:04:14.795 "070126ec-e803-4492-91d9-a00b438e992c" 00:04:14.795 ], 00:04:14.795 "product_name": "Malloc disk", 00:04:14.795 "block_size": 4096, 00:04:14.795 "num_blocks": 256, 00:04:14.795 "uuid": "070126ec-e803-4492-91d9-a00b438e992c", 00:04:14.795 "assigned_rate_limits": { 00:04:14.795 "rw_ios_per_sec": 0, 00:04:14.795 "rw_mbytes_per_sec": 0, 00:04:14.795 "r_mbytes_per_sec": 0, 00:04:14.795 "w_mbytes_per_sec": 0 00:04:14.795 }, 00:04:14.795 "claimed": false, 00:04:14.795 "zoned": false, 00:04:14.795 "supported_io_types": { 00:04:14.795 "read": true, 00:04:14.795 "write": true, 00:04:14.795 "unmap": true, 00:04:14.795 "write_zeroes": true, 00:04:14.795 
"flush": true, 00:04:14.795 "reset": true, 00:04:14.795 "compare": false, 00:04:14.795 "compare_and_write": false, 00:04:14.795 "abort": true, 00:04:14.795 "nvme_admin": false, 00:04:14.795 "nvme_io": false 00:04:14.795 }, 00:04:14.795 "memory_domains": [ 00:04:14.795 { 00:04:14.795 "dma_device_id": "system", 00:04:14.795 "dma_device_type": 1 00:04:14.795 }, 00:04:14.795 { 00:04:14.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.795 "dma_device_type": 2 00:04:14.795 } 00:04:14.795 ], 00:04:14.795 "driver_specific": {} 00:04:14.795 } 00:04:14.795 ]' 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.795 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.795 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.053 11:41:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.053 00:04:15.053 real 0m0.107s 00:04:15.053 user 0m0.071s 00:04:15.053 sys 0m0.007s 00:04:15.053 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.053 11:41:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 END TEST rpc_plugins 00:04:15.053 ************************************ 00:04:15.053 11:41:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.053 11:41:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.053 11:41:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.053 11:41:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 START TEST rpc_trace_cmd_test 00:04:15.053 ************************************ 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.053 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid800421", 00:04:15.053 "tpoint_group_mask": "0x8", 00:04:15.053 "iscsi_conn": { 00:04:15.053 "mask": "0x2", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "scsi": { 00:04:15.053 "mask": "0x4", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "bdev": { 00:04:15.053 "mask": "0x8", 00:04:15.053 "tpoint_mask": 
"0xffffffffffffffff" 00:04:15.053 }, 00:04:15.053 "nvmf_rdma": { 00:04:15.053 "mask": "0x10", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "nvmf_tcp": { 00:04:15.053 "mask": "0x20", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "ftl": { 00:04:15.053 "mask": "0x40", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "blobfs": { 00:04:15.053 "mask": "0x80", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "dsa": { 00:04:15.053 "mask": "0x200", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "thread": { 00:04:15.053 "mask": "0x400", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "nvme_pcie": { 00:04:15.053 "mask": "0x800", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "iaa": { 00:04:15.053 "mask": "0x1000", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "nvme_tcp": { 00:04:15.053 "mask": "0x2000", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "bdev_nvme": { 00:04:15.053 "mask": "0x4000", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 }, 00:04:15.053 "sock": { 00:04:15.053 "mask": "0x8000", 00:04:15.053 "tpoint_mask": "0x0" 00:04:15.053 } 00:04:15.053 }' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:15.053 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.311 11:41:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.311 00:04:15.311 real 0m0.177s 00:04:15.311 user 0m0.161s 00:04:15.311 sys 0m0.010s 00:04:15.311 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.311 11:41:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.311 ************************************ 00:04:15.311 END TEST rpc_trace_cmd_test 00:04:15.311 ************************************ 00:04:15.311 11:41:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.311 11:41:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.311 11:41:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.311 11:41:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.312 11:41:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.312 11:41:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 ************************************ 00:04:15.312 START TEST rpc_daemon_integrity 00:04:15.312 ************************************ 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.312 { 00:04:15.312 "name": "Malloc2", 00:04:15.312 "aliases": [ 00:04:15.312 "8f7e3f06-06ee-4063-9a5e-9dc19059ad33" 00:04:15.312 ], 00:04:15.312 "product_name": "Malloc disk", 00:04:15.312 "block_size": 512, 00:04:15.312 "num_blocks": 16384, 00:04:15.312 "uuid": "8f7e3f06-06ee-4063-9a5e-9dc19059ad33", 00:04:15.312 "assigned_rate_limits": { 00:04:15.312 "rw_ios_per_sec": 0, 00:04:15.312 "rw_mbytes_per_sec": 0, 00:04:15.312 "r_mbytes_per_sec": 0, 00:04:15.312 "w_mbytes_per_sec": 0 00:04:15.312 }, 00:04:15.312 "claimed": false, 00:04:15.312 "zoned": false, 00:04:15.312 "supported_io_types": { 00:04:15.312 "read": true, 00:04:15.312 "write": true, 00:04:15.312 "unmap": true, 00:04:15.312 "write_zeroes": true, 00:04:15.312 "flush": true, 00:04:15.312 "reset": true, 00:04:15.312 "compare": false, 00:04:15.312 "compare_and_write": false, 00:04:15.312 "abort": true, 00:04:15.312 "nvme_admin": false, 00:04:15.312 "nvme_io": false 00:04:15.312 }, 00:04:15.312 "memory_domains": [ 00:04:15.312 { 00:04:15.312 "dma_device_id": "system", 00:04:15.312 "dma_device_type": 1 00:04:15.312 }, 00:04:15.312 { 00:04:15.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.312 "dma_device_type": 2 00:04:15.312 } 00:04:15.312 ], 00:04:15.312 "driver_specific": {} 00:04:15.312 } 00:04:15.312 ]' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 [2024-07-12 11:41:04.698240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.312 [2024-07-12 11:41:04.698280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.312 [2024-07-12 11:41:04.698305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x186f8b0 00:04:15.312 [2024-07-12 11:41:04.698318] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.312 [2024-07-12 11:41:04.699531] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.312 [2024-07-12 11:41:04.699555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.312 Passthru0 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.312 { 00:04:15.312 "name": "Malloc2", 00:04:15.312 "aliases": [ 00:04:15.312 "8f7e3f06-06ee-4063-9a5e-9dc19059ad33" 00:04:15.312 ], 00:04:15.312 "product_name": "Malloc disk", 00:04:15.312 "block_size": 512, 00:04:15.312 "num_blocks": 16384, 00:04:15.312 "uuid": "8f7e3f06-06ee-4063-9a5e-9dc19059ad33", 00:04:15.312 "assigned_rate_limits": { 00:04:15.312 "rw_ios_per_sec": 0, 00:04:15.312 "rw_mbytes_per_sec": 0, 00:04:15.312 "r_mbytes_per_sec": 0, 00:04:15.312 "w_mbytes_per_sec": 0 00:04:15.312 }, 00:04:15.312 "claimed": true, 00:04:15.312 "claim_type": "exclusive_write", 00:04:15.312 "zoned": false, 00:04:15.312 "supported_io_types": { 00:04:15.312 "read": true, 00:04:15.312 "write": true, 00:04:15.312 "unmap": true, 00:04:15.312 "write_zeroes": true, 00:04:15.312 "flush": true, 00:04:15.312 "reset": true, 00:04:15.312 "compare": false, 00:04:15.312 "compare_and_write": false, 00:04:15.312 "abort": true, 00:04:15.312 "nvme_admin": false, 00:04:15.312 "nvme_io": false 00:04:15.312 }, 00:04:15.312 "memory_domains": [ 00:04:15.312 { 00:04:15.312 "dma_device_id": "system", 00:04:15.312 "dma_device_type": 1 00:04:15.312 }, 00:04:15.312 { 00:04:15.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.312 "dma_device_type": 2 00:04:15.312 } 00:04:15.312 ], 00:04:15.312 "driver_specific": {} 00:04:15.312 }, 00:04:15.312 { 00:04:15.312 "name": "Passthru0", 00:04:15.312 "aliases": [ 00:04:15.312 "be9d7192-b1d8-5dbc-8f36-27b1823d0ebd" 00:04:15.312 ], 00:04:15.312 "product_name": "passthru", 00:04:15.312 "block_size": 512, 00:04:15.312 "num_blocks": 16384, 00:04:15.312 "uuid": "be9d7192-b1d8-5dbc-8f36-27b1823d0ebd", 00:04:15.312 "assigned_rate_limits": { 00:04:15.312 "rw_ios_per_sec": 0, 00:04:15.312 "rw_mbytes_per_sec": 0, 00:04:15.312 "r_mbytes_per_sec": 0, 00:04:15.312 "w_mbytes_per_sec": 0 00:04:15.312 }, 00:04:15.312 "claimed": false, 00:04:15.312 "zoned": false, 00:04:15.312 "supported_io_types": { 00:04:15.312 "read": true, 00:04:15.312 "write": true, 00:04:15.312 "unmap": true, 00:04:15.312 "write_zeroes": true, 00:04:15.312 "flush": true, 00:04:15.312 "reset": true, 00:04:15.312 "compare": false, 00:04:15.312 "compare_and_write": false, 00:04:15.312 "abort": true, 00:04:15.312 "nvme_admin": false, 00:04:15.312 "nvme_io": false 00:04:15.312 }, 00:04:15.312 "memory_domains": [ 00:04:15.312 { 00:04:15.312 "dma_device_id": "system", 00:04:15.312 "dma_device_type": 1 00:04:15.312 }, 00:04:15.312 { 00:04:15.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.312 "dma_device_type": 2 00:04:15.312 } 00:04:15.312 ], 00:04:15.312 "driver_specific": { 00:04:15.312 "passthru": { 00:04:15.312 "name": "Passthru0", 00:04:15.312 "base_bdev_name": "Malloc2" 00:04:15.312 } 00:04:15.312 } 00:04:15.312 } 00:04:15.312 ]' 00:04:15.312 11:41:04 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.312 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.570 11:41:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.570 00:04:15.570 real 0m0.214s 00:04:15.570 user 0m0.133s 00:04:15.570 sys 0m0.021s 00:04:15.570 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.570 11:41:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.570 ************************************ 00:04:15.570 END TEST rpc_daemon_integrity 00:04:15.570 ************************************ 00:04:15.570 11:41:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.570 11:41:04 rpc -- rpc/rpc.sh@84 -- # killprocess 800421 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@949 -- # '[' -z 800421 ']' 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@953 -- # kill -0 800421 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@954 -- # uname 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 800421 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 800421' 00:04:15.570 killing process with pid 800421 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@968 -- # kill 800421 00:04:15.570 11:41:04 rpc -- common/autotest_common.sh@973 -- # wait 800421 00:04:15.827 00:04:15.827 real 0m1.878s 00:04:15.827 user 0m2.339s 00:04:15.827 sys 0m0.549s 00:04:15.827 11:41:05 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.827 11:41:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.827 ************************************ 00:04:15.828 END TEST rpc 00:04:15.828 ************************************ 00:04:15.828 11:41:05 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.828 11:41:05 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.828 11:41:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.828 11:41:05 -- common/autotest_common.sh@10 -- # set +x 00:04:16.084 ************************************ 00:04:16.085 START TEST skip_rpc 00:04:16.085 ************************************ 00:04:16.085 11:41:05 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.085 * Looking for test storage... 00:04:16.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.085 11:41:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.085 11:41:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.085 11:41:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.085 11:41:05 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:16.085 11:41:05 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:16.085 11:41:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.085 ************************************ 00:04:16.085 START TEST skip_rpc 00:04:16.085 ************************************ 00:04:16.085 11:41:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:16.085 11:41:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=800861 00:04:16.085 11:41:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.085 11:41:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.085 11:41:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.085 [2024-07-12 11:41:05.462772] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:04:16.085 [2024-07-12 11:41:05.462834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800861 ] 00:04:16.085 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.085 [2024-07-12 11:41:05.519946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.343 [2024-07-12 11:41:05.622373] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 800861 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 800861 ']' 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 800861 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 800861 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 800861' 00:04:21.616 killing process with pid 800861 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 800861 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 800861 00:04:21.616 00:04:21.616 real 0m5.456s 00:04:21.616 user 0m5.148s 00:04:21.616 sys 0m0.313s 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:21.616 11:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.616 ************************************ 00:04:21.616 END TEST skip_rpc 
00:04:21.616 ************************************ 00:04:21.616 11:41:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.616 11:41:10 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.616 11:41:10 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.616 11:41:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.616 ************************************ 00:04:21.616 START TEST skip_rpc_with_json 00:04:21.616 ************************************ 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=801554 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 801554 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 801554 ']' 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:21.616 11:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.616 [2024-07-12 11:41:10.970143] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:04:21.616 [2024-07-12 11:41:10.970248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801554 ] 00:04:21.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.616 [2024-07-12 11:41:11.026747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.876 [2024-07-12 11:41:11.127543] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.136 [2024-07-12 11:41:11.379213] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:22.136 request: 00:04:22.136 { 00:04:22.136 "trtype": "tcp", 00:04:22.136 "method": "nvmf_get_transports", 00:04:22.136 "req_id": 1 00:04:22.136 } 00:04:22.136 Got JSON-RPC error response 00:04:22.136 response: 00:04:22.136 { 00:04:22.136 "code": -19, 00:04:22.136 "message": "No such device" 00:04:22.136 } 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.136 [2024-07-12 11:41:11.387310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:22.136 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.136 { 00:04:22.136 "subsystems": [ 00:04:22.136 { 00:04:22.136 "subsystem": "vfio_user_target", 00:04:22.136 "config": null 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "keyring", 00:04:22.136 "config": [] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "iobuf", 00:04:22.136 "config": [ 00:04:22.136 { 00:04:22.136 "method": "iobuf_set_options", 00:04:22.136 "params": { 00:04:22.136 "small_pool_count": 8192, 00:04:22.136 "large_pool_count": 1024, 00:04:22.136 "small_bufsize": 8192, 00:04:22.136 "large_bufsize": 135168 00:04:22.136 } 00:04:22.136 } 00:04:22.136 ] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "sock", 00:04:22.136 "config": [ 00:04:22.136 { 00:04:22.136 "method": "sock_set_default_impl", 00:04:22.136 "params": { 00:04:22.136 "impl_name": "posix" 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": 
"sock_impl_set_options", 00:04:22.136 "params": { 00:04:22.136 "impl_name": "ssl", 00:04:22.136 "recv_buf_size": 4096, 00:04:22.136 "send_buf_size": 4096, 00:04:22.136 "enable_recv_pipe": true, 00:04:22.136 "enable_quickack": false, 00:04:22.136 "enable_placement_id": 0, 00:04:22.136 "enable_zerocopy_send_server": true, 00:04:22.136 "enable_zerocopy_send_client": false, 00:04:22.136 "zerocopy_threshold": 0, 00:04:22.136 "tls_version": 0, 00:04:22.136 "enable_ktls": false 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "sock_impl_set_options", 00:04:22.136 "params": { 00:04:22.136 "impl_name": "posix", 00:04:22.136 "recv_buf_size": 2097152, 00:04:22.136 "send_buf_size": 2097152, 00:04:22.136 "enable_recv_pipe": true, 00:04:22.136 "enable_quickack": false, 00:04:22.136 "enable_placement_id": 0, 00:04:22.136 "enable_zerocopy_send_server": true, 00:04:22.136 "enable_zerocopy_send_client": false, 00:04:22.136 "zerocopy_threshold": 0, 00:04:22.136 "tls_version": 0, 00:04:22.136 "enable_ktls": false 00:04:22.136 } 00:04:22.136 } 00:04:22.136 ] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "vmd", 00:04:22.136 "config": [] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "accel", 00:04:22.136 "config": [ 00:04:22.136 { 00:04:22.136 "method": "accel_set_options", 00:04:22.136 "params": { 00:04:22.136 "small_cache_size": 128, 00:04:22.136 "large_cache_size": 16, 00:04:22.136 "task_count": 2048, 00:04:22.136 "sequence_count": 2048, 00:04:22.136 "buf_count": 2048 00:04:22.136 } 00:04:22.136 } 00:04:22.136 ] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "bdev", 00:04:22.136 "config": [ 00:04:22.136 { 00:04:22.136 "method": "bdev_set_options", 00:04:22.136 "params": { 00:04:22.136 "bdev_io_pool_size": 65535, 00:04:22.136 "bdev_io_cache_size": 256, 00:04:22.136 "bdev_auto_examine": true, 00:04:22.136 "iobuf_small_cache_size": 128, 00:04:22.136 "iobuf_large_cache_size": 16 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "bdev_raid_set_options", 00:04:22.136 "params": { 00:04:22.136 "process_window_size_kb": 1024 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "bdev_iscsi_set_options", 00:04:22.136 "params": { 00:04:22.136 "timeout_sec": 30 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "bdev_nvme_set_options", 00:04:22.136 "params": { 00:04:22.136 "action_on_timeout": "none", 00:04:22.136 "timeout_us": 0, 00:04:22.136 "timeout_admin_us": 0, 00:04:22.136 "keep_alive_timeout_ms": 10000, 00:04:22.136 "arbitration_burst": 0, 00:04:22.136 "low_priority_weight": 0, 00:04:22.136 "medium_priority_weight": 0, 00:04:22.136 "high_priority_weight": 0, 00:04:22.136 "nvme_adminq_poll_period_us": 10000, 00:04:22.136 "nvme_ioq_poll_period_us": 0, 00:04:22.136 "io_queue_requests": 0, 00:04:22.136 "delay_cmd_submit": true, 00:04:22.136 "transport_retry_count": 4, 00:04:22.136 "bdev_retry_count": 3, 00:04:22.136 "transport_ack_timeout": 0, 00:04:22.136 "ctrlr_loss_timeout_sec": 0, 00:04:22.136 "reconnect_delay_sec": 0, 00:04:22.136 "fast_io_fail_timeout_sec": 0, 00:04:22.136 "disable_auto_failback": false, 00:04:22.136 "generate_uuids": false, 00:04:22.136 "transport_tos": 0, 00:04:22.136 "nvme_error_stat": false, 00:04:22.136 "rdma_srq_size": 0, 00:04:22.136 "io_path_stat": false, 00:04:22.136 "allow_accel_sequence": false, 00:04:22.136 "rdma_max_cq_size": 0, 00:04:22.136 "rdma_cm_event_timeout_ms": 0, 00:04:22.136 "dhchap_digests": [ 00:04:22.136 "sha256", 00:04:22.136 "sha384", 00:04:22.136 "sha512" 
00:04:22.136 ], 00:04:22.136 "dhchap_dhgroups": [ 00:04:22.136 "null", 00:04:22.136 "ffdhe2048", 00:04:22.136 "ffdhe3072", 00:04:22.136 "ffdhe4096", 00:04:22.136 "ffdhe6144", 00:04:22.136 "ffdhe8192" 00:04:22.136 ] 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "bdev_nvme_set_hotplug", 00:04:22.136 "params": { 00:04:22.136 "period_us": 100000, 00:04:22.136 "enable": false 00:04:22.136 } 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "method": "bdev_wait_for_examine" 00:04:22.136 } 00:04:22.136 ] 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "scsi", 00:04:22.136 "config": null 00:04:22.136 }, 00:04:22.136 { 00:04:22.136 "subsystem": "scheduler", 00:04:22.136 "config": [ 00:04:22.136 { 00:04:22.136 "method": "framework_set_scheduler", 00:04:22.136 "params": { 00:04:22.136 "name": "static" 00:04:22.136 } 00:04:22.136 } 00:04:22.136 ] 00:04:22.136 }, 00:04:22.136 { 00:04:22.137 "subsystem": "vhost_scsi", 00:04:22.137 "config": [] 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "subsystem": "vhost_blk", 00:04:22.137 "config": [] 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "subsystem": "ublk", 00:04:22.137 "config": [] 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "subsystem": "nbd", 00:04:22.137 "config": [] 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "subsystem": "nvmf", 00:04:22.137 "config": [ 00:04:22.137 { 00:04:22.137 "method": "nvmf_set_config", 00:04:22.137 "params": { 00:04:22.137 "discovery_filter": "match_any", 00:04:22.137 "admin_cmd_passthru": { 00:04:22.137 "identify_ctrlr": false 00:04:22.137 } 00:04:22.137 } 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "method": "nvmf_set_max_subsystems", 00:04:22.137 "params": { 00:04:22.137 "max_subsystems": 1024 00:04:22.137 } 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "method": "nvmf_set_crdt", 00:04:22.137 "params": { 00:04:22.137 "crdt1": 0, 00:04:22.137 "crdt2": 0, 00:04:22.137 "crdt3": 0 00:04:22.137 } 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "method": "nvmf_create_transport", 00:04:22.137 "params": { 00:04:22.137 "trtype": "TCP", 00:04:22.137 "max_queue_depth": 128, 00:04:22.137 "max_io_qpairs_per_ctrlr": 127, 00:04:22.137 "in_capsule_data_size": 4096, 00:04:22.137 "max_io_size": 131072, 00:04:22.137 "io_unit_size": 131072, 00:04:22.137 "max_aq_depth": 128, 00:04:22.137 "num_shared_buffers": 511, 00:04:22.137 "buf_cache_size": 4294967295, 00:04:22.137 "dif_insert_or_strip": false, 00:04:22.137 "zcopy": false, 00:04:22.137 "c2h_success": true, 00:04:22.137 "sock_priority": 0, 00:04:22.137 "abort_timeout_sec": 1, 00:04:22.137 "ack_timeout": 0, 00:04:22.137 "data_wr_pool_size": 0 00:04:22.137 } 00:04:22.137 } 00:04:22.137 ] 00:04:22.137 }, 00:04:22.137 { 00:04:22.137 "subsystem": "iscsi", 00:04:22.137 "config": [ 00:04:22.137 { 00:04:22.137 "method": "iscsi_set_options", 00:04:22.137 "params": { 00:04:22.137 "node_base": "iqn.2016-06.io.spdk", 00:04:22.137 "max_sessions": 128, 00:04:22.137 "max_connections_per_session": 2, 00:04:22.137 "max_queue_depth": 64, 00:04:22.137 "default_time2wait": 2, 00:04:22.137 "default_time2retain": 20, 00:04:22.137 "first_burst_length": 8192, 00:04:22.137 "immediate_data": true, 00:04:22.137 "allow_duplicated_isid": false, 00:04:22.137 "error_recovery_level": 0, 00:04:22.137 "nop_timeout": 60, 00:04:22.137 "nop_in_interval": 30, 00:04:22.137 "disable_chap": false, 00:04:22.137 "require_chap": false, 00:04:22.137 "mutual_chap": false, 00:04:22.137 "chap_group": 0, 00:04:22.137 "max_large_datain_per_connection": 64, 00:04:22.137 "max_r2t_per_connection": 4, 00:04:22.137 
"pdu_pool_size": 36864, 00:04:22.137 "immediate_data_pool_size": 16384, 00:04:22.137 "data_out_pool_size": 2048 00:04:22.137 } 00:04:22.137 } 00:04:22.137 ] 00:04:22.137 } 00:04:22.137 ] 00:04:22.137 } 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 801554 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 801554 ']' 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 801554 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 801554 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 801554' 00:04:22.137 killing process with pid 801554 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 801554 00:04:22.137 11:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 801554 00:04:22.706 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=801694 00:04:22.706 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.706 11:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 801694 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 801694 ']' 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 801694 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:27.984 11:41:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 801694 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 801694' 00:04:27.984 killing process with pid 801694 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 801694 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 801694 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.984 00:04:27.984 real 0m6.543s 
00:04:27.984 user 0m6.183s 00:04:27.984 sys 0m0.630s 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.984 11:41:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.984 ************************************ 00:04:27.984 END TEST skip_rpc_with_json 00:04:27.984 ************************************ 00:04:28.245 11:41:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.245 ************************************ 00:04:28.245 START TEST skip_rpc_with_delay 00:04:28.245 ************************************ 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.245 [2024-07-12 11:41:17.569674] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:28.245 [2024-07-12 11:41:17.569780] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:28.245 00:04:28.245 real 0m0.071s 00:04:28.245 user 0m0.039s 00:04:28.245 sys 0m0.032s 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:28.245 11:41:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.245 ************************************ 00:04:28.245 END TEST skip_rpc_with_delay 00:04:28.245 ************************************ 00:04:28.245 11:41:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.245 11:41:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.245 11:41:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.245 11:41:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.245 ************************************ 00:04:28.245 START TEST exit_on_failed_rpc_init 00:04:28.245 ************************************ 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=802411 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 802411 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 802411 ']' 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:28.245 11:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.245 [2024-07-12 11:41:17.688924] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:04:28.245 [2024-07-12 11:41:17.689005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802411 ] 00:04:28.245 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.505 [2024-07-12 11:41:17.746162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.505 [2024-07-12 11:41:17.856563] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.764 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.764 [2024-07-12 11:41:18.151262] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:28.764 [2024-07-12 11:41:18.151339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802418 ] 00:04:28.764 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.764 [2024-07-12 11:41:18.207603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.023 [2024-07-12 11:41:18.317121] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.023 [2024-07-12 11:41:18.317251] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:29.023 [2024-07-12 11:41:18.317269] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.023 [2024-07-12 11:41:18.317281] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 802411 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 802411 ']' 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 802411 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 802411 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 802411' 00:04:29.023 killing process with pid 802411 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 802411 00:04:29.023 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 802411 00:04:29.587 00:04:29.587 real 0m1.258s 00:04:29.587 user 0m1.429s 00:04:29.587 sys 0m0.425s 00:04:29.587 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.587 11:41:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.587 ************************************ 00:04:29.587 END TEST exit_on_failed_rpc_init 00:04:29.587 ************************************ 00:04:29.587 11:41:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.587 00:04:29.587 real 0m13.579s 00:04:29.588 user 0m12.898s 00:04:29.588 sys 0m1.567s 00:04:29.588 11:41:18 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.588 11:41:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.588 ************************************ 00:04:29.588 END TEST skip_rpc 00:04:29.588 ************************************ 00:04:29.588 11:41:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.588 11:41:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.588 11:41:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.588 11:41:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.588 ************************************ 00:04:29.588 START TEST rpc_client 00:04:29.588 ************************************ 00:04:29.588 11:41:18 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.588 * Looking for test storage... 00:04:29.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:29.588 11:41:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:29.588 OK 00:04:29.588 11:41:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.588 00:04:29.588 real 0m0.071s 00:04:29.588 user 0m0.040s 00:04:29.588 sys 0m0.036s 00:04:29.588 11:41:19 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.588 11:41:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.588 ************************************ 00:04:29.588 END TEST rpc_client 00:04:29.588 ************************************ 00:04:29.588 11:41:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:29.588 11:41:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.588 11:41:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.588 11:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:29.846 ************************************ 00:04:29.846 START TEST json_config 00:04:29.846 ************************************ 00:04:29.846 11:41:19 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:29.846 11:41:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:29.846 11:41:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.846 11:41:19 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.846 11:41:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.846 11:41:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.846 11:41:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.846 11:41:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.846 11:41:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:29.846 11:41:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@47 -- # : 0 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:29.846 11:41:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:29.846 11:41:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:29.846 11:41:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:29.846 11:41:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:29.846 11:41:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:29.847 INFO: JSON configuration test init 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.847 11:41:19 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:29.847 11:41:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:29.847 11:41:19 json_config -- json_config/common.sh@10 -- # shift 00:04:29.847 11:41:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:29.847 11:41:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:29.847 11:41:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:29.847 11:41:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.847 11:41:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.847 11:41:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=802662 00:04:29.847 11:41:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:29.847 11:41:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:29.847 Waiting for target to run... 
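The target launched just above comes up paused (--wait-for-rpc), so its whole configuration is driven over /var/tmp/spdk_tgt.sock; a rough sketch of the sequence that follows (the exact redirection used by json_config.sh may differ) is:
  # start the target with an empty config and only an RPC socket
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # generate an NVMe-based config and feed it to the running target
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config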
00:04:29.847 11:41:19 json_config -- json_config/common.sh@25 -- # waitforlisten 802662 /var/tmp/spdk_tgt.sock 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@830 -- # '[' -z 802662 ']' 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:29.847 11:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.847 [2024-07-12 11:41:19.200503] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:29.847 [2024-07-12 11:41:19.200624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802662 ] 00:04:29.847 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.412 [2024-07-12 11:41:19.687064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.412 [2024-07-12 11:41:19.780119] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@863 -- # return 0 00:04:30.670 11:41:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.670 00:04:30.670 11:41:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:30.670 11:41:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.670 11:41:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:30.670 11:41:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:30.670 11:41:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.929 11:41:20 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:30.929 11:41:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:30.929 11:41:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:34.214 11:41:23 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:34.214 11:41:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:34.214 11:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:34.214 11:41:23 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.214 11:41:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.493 MallocForNvmf0 00:04:34.493 11:41:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.493 11:41:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.770 MallocForNvmf1 00:04:34.770 11:41:24 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:34.770 11:41:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.027 [2024-07-12 11:41:24.301513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.027 11:41:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.027 11:41:24 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.284 11:41:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.284 11:41:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.543 11:41:24 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.543 11:41:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.803 11:41:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.803 11:41:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.803 [2024-07-12 11:41:25.276722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.061 11:41:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:36.061 11:41:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:36.061 11:41:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.061 11:41:25 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:36.061 11:41:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:36.061 11:41:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.061 11:41:25 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:36.061 11:41:25 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.061 11:41:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.319 MallocBdevForConfigChangeCheck 00:04:36.319 11:41:25 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:36.319 11:41:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:36.319 11:41:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.319 11:41:25 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:36.319 11:41:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.578 11:41:25 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:36.578 INFO: shutting down applications... 
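Condensed, the NVMe-oF target configuration built over /var/tmp/spdk_tgt.sock in the RPCs above is the following; the final redirect into spdk_tgt_config.json is an assumption based on the later diff against that file:
  # two malloc bdevs become namespaces of one subsystem listening on TCP 127.0.0.1:4420
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # snapshot the live config for the relaunch/diff steps further down
  rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json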
00:04:36.578 11:41:25 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:36.578 11:41:25 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:36.578 11:41:25 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:36.578 11:41:25 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:38.486 Calling clear_iscsi_subsystem 00:04:38.486 Calling clear_nvmf_subsystem 00:04:38.486 Calling clear_nbd_subsystem 00:04:38.486 Calling clear_ublk_subsystem 00:04:38.486 Calling clear_vhost_blk_subsystem 00:04:38.486 Calling clear_vhost_scsi_subsystem 00:04:38.486 Calling clear_bdev_subsystem 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:38.486 11:41:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:38.745 11:41:28 json_config -- json_config/json_config.sh@345 -- # break 00:04:38.745 11:41:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:38.745 11:41:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:38.745 11:41:28 json_config -- json_config/common.sh@31 -- # local app=target 00:04:38.745 11:41:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.745 11:41:28 json_config -- json_config/common.sh@35 -- # [[ -n 802662 ]] 00:04:38.745 11:41:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 802662 00:04:38.745 11:41:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.745 11:41:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.745 11:41:28 json_config -- json_config/common.sh@41 -- # kill -0 802662 00:04:38.745 11:41:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.312 11:41:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.312 11:41:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.312 11:41:28 json_config -- json_config/common.sh@41 -- # kill -0 802662 00:04:39.312 11:41:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.312 11:41:28 json_config -- json_config/common.sh@43 -- # break 00:04:39.312 11:41:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.312 11:41:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.312 SPDK target shutdown done 00:04:39.312 11:41:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:39.312 INFO: relaunching applications... 
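The clear-and-verify pass that preceded this shutdown boils down to wiping the live configuration and confirming that nothing but global parameters remains (the script retries the check up to 100 times before giving up):
  test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method delete_global_parameters \
      | test/json_config/config_filter.py -method check_empty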
00:04:39.312 11:41:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.312 11:41:28 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.312 11:41:28 json_config -- json_config/common.sh@10 -- # shift 00:04:39.312 11:41:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.312 11:41:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.312 11:41:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.312 11:41:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.312 11:41:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.312 11:41:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=803971 00:04:39.312 11:41:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.312 11:41:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.312 Waiting for target to run... 00:04:39.312 11:41:28 json_config -- json_config/common.sh@25 -- # waitforlisten 803971 /var/tmp/spdk_tgt.sock 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@830 -- # '[' -z 803971 ']' 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:39.312 11:41:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.312 [2024-07-12 11:41:28.576175] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:39.312 [2024-07-12 11:41:28.576284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803971 ] 00:04:39.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.878 [2024-07-12 11:41:29.096551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.878 [2024-07-12 11:41:29.188915] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.202 [2024-07-12 11:41:32.219134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.202 [2024-07-12 11:41:32.251552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.769 11:41:32 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:43.769 11:41:32 json_config -- common/autotest_common.sh@863 -- # return 0 00:04:43.769 11:41:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.769 00:04:43.769 11:41:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:43.769 11:41:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:43.769 INFO: Checking if target configuration is the same... 
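The check announced here is a normalized diff of the running configuration against the JSON file the target was just relaunched from; in outline it is invoked as below, and internally json_diff.sh sorts both inputs with config_filter.py -method sort before diffing them, so identical output means the relaunch reproduced the saved configuration:
  test/json_config/json_diff.sh \
      <(rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
      spdk_tgt_config.json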
00:04:43.769 11:41:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.769 11:41:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:43.769 11:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.769 + '[' 2 -ne 2 ']' 00:04:43.769 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.769 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:43.769 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.769 +++ basename /dev/fd/62 00:04:43.769 ++ mktemp /tmp/62.XXX 00:04:43.769 + tmp_file_1=/tmp/62.AuZ 00:04:43.769 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.769 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.769 + tmp_file_2=/tmp/spdk_tgt_config.json.WXm 00:04:43.769 + ret=0 00:04:43.769 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.025 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.025 + diff -u /tmp/62.AuZ /tmp/spdk_tgt_config.json.WXm 00:04:44.025 + echo 'INFO: JSON config files are the same' 00:04:44.025 INFO: JSON config files are the same 00:04:44.025 + rm /tmp/62.AuZ /tmp/spdk_tgt_config.json.WXm 00:04:44.025 + exit 0 00:04:44.025 11:41:33 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:44.025 11:41:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.025 INFO: changing configuration and checking if this can be detected... 00:04:44.025 11:41:33 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.025 11:41:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.282 11:41:33 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.282 11:41:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:44.282 11:41:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.282 + '[' 2 -ne 2 ']' 00:04:44.282 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.282 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:44.282 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.282 +++ basename /dev/fd/62 00:04:44.282 ++ mktemp /tmp/62.XXX 00:04:44.282 + tmp_file_1=/tmp/62.59b 00:04:44.282 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.282 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.282 + tmp_file_2=/tmp/spdk_tgt_config.json.wQc 00:04:44.282 + ret=0 00:04:44.282 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.540 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.799 + diff -u /tmp/62.59b /tmp/spdk_tgt_config.json.wQc 00:04:44.799 + ret=1 00:04:44.799 + echo '=== Start of file: /tmp/62.59b ===' 00:04:44.799 + cat /tmp/62.59b 00:04:44.799 + echo '=== End of file: /tmp/62.59b ===' 00:04:44.799 + echo '' 00:04:44.799 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wQc ===' 00:04:44.799 + cat /tmp/spdk_tgt_config.json.wQc 00:04:44.799 + echo '=== End of file: /tmp/spdk_tgt_config.json.wQc ===' 00:04:44.799 + echo '' 00:04:44.799 + rm /tmp/62.59b /tmp/spdk_tgt_config.json.wQc 00:04:44.799 + exit 1 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:44.799 INFO: configuration change detected. 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 803971 ]] 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:44.799 11:41:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:44.799 11:41:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.800 11:41:34 json_config -- json_config/json_config.sh@323 -- # killprocess 803971 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@949 -- # '[' -z 803971 ']' 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@953 -- # kill -0 803971 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@954 -- # uname 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:44.800 11:41:34 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 803971 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 803971' 00:04:44.800 killing process with pid 803971 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@968 -- # kill 803971 00:04:44.800 11:41:34 json_config -- common/autotest_common.sh@973 -- # wait 803971 00:04:46.709 11:41:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.709 11:41:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:46.709 11:41:35 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:46.709 11:41:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 11:41:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:46.709 11:41:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:46.709 INFO: Success 00:04:46.709 00:04:46.709 real 0m16.718s 00:04:46.709 user 0m18.442s 00:04:46.709 sys 0m2.252s 00:04:46.709 11:41:35 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:46.709 11:41:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 ************************************ 00:04:46.709 END TEST json_config 00:04:46.709 ************************************ 00:04:46.709 11:41:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.709 11:41:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.709 11:41:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.709 11:41:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 ************************************ 00:04:46.709 START TEST json_config_extra_key 00:04:46.709 ************************************ 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.709 11:41:35 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.709 11:41:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.709 11:41:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.709 11:41:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.709 11:41:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.709 11:41:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.709 11:41:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.709 11:41:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.709 11:41:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.709 11:41:35 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:46.709 11:41:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.709 INFO: launching applications... 00:04:46.709 11:41:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=804912 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.709 Waiting for target to run... 
00:04:46.709 11:41:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 804912 /var/tmp/spdk_tgt.sock 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 804912 ']' 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:46.709 11:41:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 [2024-07-12 11:41:35.962084] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:46.709 [2024-07-12 11:41:35.962194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804912 ] 00:04:46.709 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.969 [2024-07-12 11:41:36.292559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.969 [2024-07-12 11:41:36.371331] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.537 11:41:36 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:47.537 11:41:36 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.537 00:04:47.537 11:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.537 INFO: shutting down applications... 
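Stripped of the shell helpers, this extra_key pass is: start the target from a static JSON config, confirm the RPC socket comes up, and expect a clean exit on SIGINT. A sketch under those assumptions:
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  # once the socket answers, ask the app to stop; the helper polls roughly 15s (30 x 0.5s) for it to exit
  kill -SIGINT $!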
00:04:47.537 11:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 804912 ]] 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 804912 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 804912 00:04:47.537 11:41:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 804912 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.104 11:41:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.104 SPDK target shutdown done 00:04:48.104 11:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.104 Success 00:04:48.104 00:04:48.104 real 0m1.539s 00:04:48.104 user 0m1.502s 00:04:48.104 sys 0m0.447s 00:04:48.104 11:41:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:48.104 11:41:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.104 ************************************ 00:04:48.104 END TEST json_config_extra_key 00:04:48.104 ************************************ 00:04:48.104 11:41:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.104 11:41:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:48.104 11:41:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:48.104 11:41:37 -- common/autotest_common.sh@10 -- # set +x 00:04:48.104 ************************************ 00:04:48.104 START TEST alias_rpc 00:04:48.104 ************************************ 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.104 * Looking for test storage... 
00:04:48.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:48.104 11:41:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.104 11:41:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=805215 00:04:48.104 11:41:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.104 11:41:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 805215 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 805215 ']' 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:48.104 11:41:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.104 [2024-07-12 11:41:37.547760] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:48.104 [2024-07-12 11:41:37.547863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805215 ] 00:04:48.104 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.362 [2024-07-12 11:41:37.605386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.362 [2024-07-12 11:41:37.711119] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.621 11:41:37 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:48.621 11:41:37 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:04:48.621 11:41:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:48.878 11:41:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 805215 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 805215 ']' 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 805215 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 805215 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 805215' 00:04:48.878 killing process with pid 805215 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@968 -- # kill 805215 00:04:48.878 11:41:38 alias_rpc -- common/autotest_common.sh@973 -- # wait 805215 00:04:49.444 00:04:49.444 real 0m1.240s 00:04:49.444 user 0m1.307s 00:04:49.444 sys 0m0.422s 00:04:49.444 11:41:38 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:49.445 11:41:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.445 ************************************ 
00:04:49.445 END TEST alias_rpc 00:04:49.445 ************************************ 00:04:49.445 11:41:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:49.445 11:41:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.445 11:41:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:49.445 11:41:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:49.445 11:41:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.445 ************************************ 00:04:49.445 START TEST spdkcli_tcp 00:04:49.445 ************************************ 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.445 * Looking for test storage... 00:04:49.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=805401 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.445 11:41:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 805401 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 805401 ']' 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:49.445 11:41:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.445 [2024-07-12 11:41:38.842533] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
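The plumbing for this test, visible in the commands that follow, bridges the target's UNIX-domain RPC socket to TCP with socat so that rpc.py can reach it on 127.0.0.1:9998:
  # expose /var/tmp/spdk.sock on TCP port 9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # drive the same RPC server over TCP; -r retries the connection, -t is the per-call timeout
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods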
00:04:49.445 [2024-07-12 11:41:38.842611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805401 ] 00:04:49.445 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.445 [2024-07-12 11:41:38.900402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.705 [2024-07-12 11:41:39.007631] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.705 [2024-07-12 11:41:39.007635] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.965 11:41:39 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:49.965 11:41:39 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:04:49.965 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=805413 00:04:49.965 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.965 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.224 [ 00:04:50.224 "bdev_malloc_delete", 00:04:50.224 "bdev_malloc_create", 00:04:50.224 "bdev_null_resize", 00:04:50.224 "bdev_null_delete", 00:04:50.224 "bdev_null_create", 00:04:50.224 "bdev_nvme_cuse_unregister", 00:04:50.224 "bdev_nvme_cuse_register", 00:04:50.224 "bdev_opal_new_user", 00:04:50.224 "bdev_opal_set_lock_state", 00:04:50.224 "bdev_opal_delete", 00:04:50.224 "bdev_opal_get_info", 00:04:50.224 "bdev_opal_create", 00:04:50.224 "bdev_nvme_opal_revert", 00:04:50.224 "bdev_nvme_opal_init", 00:04:50.224 "bdev_nvme_send_cmd", 00:04:50.224 "bdev_nvme_get_path_iostat", 00:04:50.224 "bdev_nvme_get_mdns_discovery_info", 00:04:50.224 "bdev_nvme_stop_mdns_discovery", 00:04:50.224 "bdev_nvme_start_mdns_discovery", 00:04:50.224 "bdev_nvme_set_multipath_policy", 00:04:50.224 "bdev_nvme_set_preferred_path", 00:04:50.224 "bdev_nvme_get_io_paths", 00:04:50.224 "bdev_nvme_remove_error_injection", 00:04:50.224 "bdev_nvme_add_error_injection", 00:04:50.224 "bdev_nvme_get_discovery_info", 00:04:50.224 "bdev_nvme_stop_discovery", 00:04:50.224 "bdev_nvme_start_discovery", 00:04:50.224 "bdev_nvme_get_controller_health_info", 00:04:50.224 "bdev_nvme_disable_controller", 00:04:50.224 "bdev_nvme_enable_controller", 00:04:50.224 "bdev_nvme_reset_controller", 00:04:50.224 "bdev_nvme_get_transport_statistics", 00:04:50.224 "bdev_nvme_apply_firmware", 00:04:50.224 "bdev_nvme_detach_controller", 00:04:50.224 "bdev_nvme_get_controllers", 00:04:50.224 "bdev_nvme_attach_controller", 00:04:50.224 "bdev_nvme_set_hotplug", 00:04:50.224 "bdev_nvme_set_options", 00:04:50.224 "bdev_passthru_delete", 00:04:50.224 "bdev_passthru_create", 00:04:50.224 "bdev_lvol_set_parent_bdev", 00:04:50.224 "bdev_lvol_set_parent", 00:04:50.224 "bdev_lvol_check_shallow_copy", 00:04:50.224 "bdev_lvol_start_shallow_copy", 00:04:50.224 "bdev_lvol_grow_lvstore", 00:04:50.224 "bdev_lvol_get_lvols", 00:04:50.224 "bdev_lvol_get_lvstores", 00:04:50.224 "bdev_lvol_delete", 00:04:50.224 "bdev_lvol_set_read_only", 00:04:50.224 "bdev_lvol_resize", 00:04:50.224 "bdev_lvol_decouple_parent", 00:04:50.224 "bdev_lvol_inflate", 00:04:50.224 "bdev_lvol_rename", 00:04:50.224 "bdev_lvol_clone_bdev", 00:04:50.224 "bdev_lvol_clone", 00:04:50.224 "bdev_lvol_snapshot", 00:04:50.224 "bdev_lvol_create", 00:04:50.224 "bdev_lvol_delete_lvstore", 00:04:50.224 "bdev_lvol_rename_lvstore", 
00:04:50.224 "bdev_lvol_create_lvstore", 00:04:50.224 "bdev_raid_set_options", 00:04:50.224 "bdev_raid_remove_base_bdev", 00:04:50.224 "bdev_raid_add_base_bdev", 00:04:50.224 "bdev_raid_delete", 00:04:50.224 "bdev_raid_create", 00:04:50.224 "bdev_raid_get_bdevs", 00:04:50.224 "bdev_error_inject_error", 00:04:50.224 "bdev_error_delete", 00:04:50.224 "bdev_error_create", 00:04:50.224 "bdev_split_delete", 00:04:50.224 "bdev_split_create", 00:04:50.224 "bdev_delay_delete", 00:04:50.224 "bdev_delay_create", 00:04:50.224 "bdev_delay_update_latency", 00:04:50.224 "bdev_zone_block_delete", 00:04:50.224 "bdev_zone_block_create", 00:04:50.224 "blobfs_create", 00:04:50.224 "blobfs_detect", 00:04:50.224 "blobfs_set_cache_size", 00:04:50.224 "bdev_aio_delete", 00:04:50.224 "bdev_aio_rescan", 00:04:50.224 "bdev_aio_create", 00:04:50.224 "bdev_ftl_set_property", 00:04:50.224 "bdev_ftl_get_properties", 00:04:50.224 "bdev_ftl_get_stats", 00:04:50.224 "bdev_ftl_unmap", 00:04:50.224 "bdev_ftl_unload", 00:04:50.224 "bdev_ftl_delete", 00:04:50.224 "bdev_ftl_load", 00:04:50.224 "bdev_ftl_create", 00:04:50.224 "bdev_virtio_attach_controller", 00:04:50.224 "bdev_virtio_scsi_get_devices", 00:04:50.224 "bdev_virtio_detach_controller", 00:04:50.224 "bdev_virtio_blk_set_hotplug", 00:04:50.224 "bdev_iscsi_delete", 00:04:50.224 "bdev_iscsi_create", 00:04:50.224 "bdev_iscsi_set_options", 00:04:50.224 "accel_error_inject_error", 00:04:50.224 "ioat_scan_accel_module", 00:04:50.224 "dsa_scan_accel_module", 00:04:50.224 "iaa_scan_accel_module", 00:04:50.224 "vfu_virtio_create_scsi_endpoint", 00:04:50.224 "vfu_virtio_scsi_remove_target", 00:04:50.224 "vfu_virtio_scsi_add_target", 00:04:50.224 "vfu_virtio_create_blk_endpoint", 00:04:50.224 "vfu_virtio_delete_endpoint", 00:04:50.224 "keyring_file_remove_key", 00:04:50.224 "keyring_file_add_key", 00:04:50.224 "keyring_linux_set_options", 00:04:50.224 "iscsi_get_histogram", 00:04:50.224 "iscsi_enable_histogram", 00:04:50.224 "iscsi_set_options", 00:04:50.224 "iscsi_get_auth_groups", 00:04:50.224 "iscsi_auth_group_remove_secret", 00:04:50.224 "iscsi_auth_group_add_secret", 00:04:50.224 "iscsi_delete_auth_group", 00:04:50.224 "iscsi_create_auth_group", 00:04:50.224 "iscsi_set_discovery_auth", 00:04:50.224 "iscsi_get_options", 00:04:50.224 "iscsi_target_node_request_logout", 00:04:50.224 "iscsi_target_node_set_redirect", 00:04:50.224 "iscsi_target_node_set_auth", 00:04:50.224 "iscsi_target_node_add_lun", 00:04:50.224 "iscsi_get_stats", 00:04:50.224 "iscsi_get_connections", 00:04:50.224 "iscsi_portal_group_set_auth", 00:04:50.224 "iscsi_start_portal_group", 00:04:50.224 "iscsi_delete_portal_group", 00:04:50.224 "iscsi_create_portal_group", 00:04:50.224 "iscsi_get_portal_groups", 00:04:50.224 "iscsi_delete_target_node", 00:04:50.224 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.224 "iscsi_target_node_add_pg_ig_maps", 00:04:50.224 "iscsi_create_target_node", 00:04:50.224 "iscsi_get_target_nodes", 00:04:50.224 "iscsi_delete_initiator_group", 00:04:50.224 "iscsi_initiator_group_remove_initiators", 00:04:50.224 "iscsi_initiator_group_add_initiators", 00:04:50.224 "iscsi_create_initiator_group", 00:04:50.224 "iscsi_get_initiator_groups", 00:04:50.224 "nvmf_set_crdt", 00:04:50.224 "nvmf_set_config", 00:04:50.224 "nvmf_set_max_subsystems", 00:04:50.224 "nvmf_stop_mdns_prr", 00:04:50.224 "nvmf_publish_mdns_prr", 00:04:50.224 "nvmf_subsystem_get_listeners", 00:04:50.224 "nvmf_subsystem_get_qpairs", 00:04:50.224 "nvmf_subsystem_get_controllers", 00:04:50.225 "nvmf_get_stats", 00:04:50.225 
"nvmf_get_transports", 00:04:50.225 "nvmf_create_transport", 00:04:50.225 "nvmf_get_targets", 00:04:50.225 "nvmf_delete_target", 00:04:50.225 "nvmf_create_target", 00:04:50.225 "nvmf_subsystem_allow_any_host", 00:04:50.225 "nvmf_subsystem_remove_host", 00:04:50.225 "nvmf_subsystem_add_host", 00:04:50.225 "nvmf_ns_remove_host", 00:04:50.225 "nvmf_ns_add_host", 00:04:50.225 "nvmf_subsystem_remove_ns", 00:04:50.225 "nvmf_subsystem_add_ns", 00:04:50.225 "nvmf_subsystem_listener_set_ana_state", 00:04:50.225 "nvmf_discovery_get_referrals", 00:04:50.225 "nvmf_discovery_remove_referral", 00:04:50.225 "nvmf_discovery_add_referral", 00:04:50.225 "nvmf_subsystem_remove_listener", 00:04:50.225 "nvmf_subsystem_add_listener", 00:04:50.225 "nvmf_delete_subsystem", 00:04:50.225 "nvmf_create_subsystem", 00:04:50.225 "nvmf_get_subsystems", 00:04:50.225 "env_dpdk_get_mem_stats", 00:04:50.225 "nbd_get_disks", 00:04:50.225 "nbd_stop_disk", 00:04:50.225 "nbd_start_disk", 00:04:50.225 "ublk_recover_disk", 00:04:50.225 "ublk_get_disks", 00:04:50.225 "ublk_stop_disk", 00:04:50.225 "ublk_start_disk", 00:04:50.225 "ublk_use_fixed_files", 00:04:50.225 "ublk_destroy_target", 00:04:50.225 "ublk_create_target", 00:04:50.225 "virtio_blk_create_transport", 00:04:50.225 "virtio_blk_get_transports", 00:04:50.225 "vhost_controller_set_coalescing", 00:04:50.225 "vhost_get_controllers", 00:04:50.225 "vhost_delete_controller", 00:04:50.225 "vhost_create_blk_controller", 00:04:50.225 "vhost_scsi_controller_remove_target", 00:04:50.225 "vhost_scsi_controller_add_target", 00:04:50.225 "vhost_start_scsi_controller", 00:04:50.225 "vhost_create_scsi_controller", 00:04:50.225 "thread_set_cpumask", 00:04:50.225 "framework_get_scheduler", 00:04:50.225 "framework_set_scheduler", 00:04:50.225 "framework_get_reactors", 00:04:50.225 "thread_get_io_channels", 00:04:50.225 "thread_get_pollers", 00:04:50.225 "thread_get_stats", 00:04:50.225 "framework_monitor_context_switch", 00:04:50.225 "spdk_kill_instance", 00:04:50.225 "log_enable_timestamps", 00:04:50.225 "log_get_flags", 00:04:50.225 "log_clear_flag", 00:04:50.225 "log_set_flag", 00:04:50.225 "log_get_level", 00:04:50.225 "log_set_level", 00:04:50.225 "log_get_print_level", 00:04:50.225 "log_set_print_level", 00:04:50.225 "framework_enable_cpumask_locks", 00:04:50.225 "framework_disable_cpumask_locks", 00:04:50.225 "framework_wait_init", 00:04:50.225 "framework_start_init", 00:04:50.225 "scsi_get_devices", 00:04:50.225 "bdev_get_histogram", 00:04:50.225 "bdev_enable_histogram", 00:04:50.225 "bdev_set_qos_limit", 00:04:50.225 "bdev_set_qd_sampling_period", 00:04:50.225 "bdev_get_bdevs", 00:04:50.225 "bdev_reset_iostat", 00:04:50.225 "bdev_get_iostat", 00:04:50.225 "bdev_examine", 00:04:50.225 "bdev_wait_for_examine", 00:04:50.225 "bdev_set_options", 00:04:50.225 "notify_get_notifications", 00:04:50.225 "notify_get_types", 00:04:50.225 "accel_get_stats", 00:04:50.225 "accel_set_options", 00:04:50.225 "accel_set_driver", 00:04:50.225 "accel_crypto_key_destroy", 00:04:50.225 "accel_crypto_keys_get", 00:04:50.225 "accel_crypto_key_create", 00:04:50.225 "accel_assign_opc", 00:04:50.225 "accel_get_module_info", 00:04:50.225 "accel_get_opc_assignments", 00:04:50.225 "vmd_rescan", 00:04:50.225 "vmd_remove_device", 00:04:50.225 "vmd_enable", 00:04:50.225 "sock_get_default_impl", 00:04:50.225 "sock_set_default_impl", 00:04:50.225 "sock_impl_set_options", 00:04:50.225 "sock_impl_get_options", 00:04:50.225 "iobuf_get_stats", 00:04:50.225 "iobuf_set_options", 00:04:50.225 "keyring_get_keys", 
00:04:50.225 "framework_get_pci_devices", 00:04:50.225 "framework_get_config", 00:04:50.225 "framework_get_subsystems", 00:04:50.225 "vfu_tgt_set_base_path", 00:04:50.225 "trace_get_info", 00:04:50.225 "trace_get_tpoint_group_mask", 00:04:50.225 "trace_disable_tpoint_group", 00:04:50.225 "trace_enable_tpoint_group", 00:04:50.225 "trace_clear_tpoint_mask", 00:04:50.225 "trace_set_tpoint_mask", 00:04:50.225 "spdk_get_version", 00:04:50.225 "rpc_get_methods" 00:04:50.225 ] 00:04:50.225 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.225 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.225 11:41:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 805401 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 805401 ']' 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 805401 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 805401 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 805401' 00:04:50.225 killing process with pid 805401 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 805401 00:04:50.225 11:41:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 805401 00:04:50.793 00:04:50.793 real 0m1.258s 00:04:50.793 user 0m2.192s 00:04:50.793 sys 0m0.425s 00:04:50.793 11:41:39 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.793 11:41:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.793 ************************************ 00:04:50.793 END TEST spdkcli_tcp 00:04:50.793 ************************************ 00:04:50.793 11:41:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.793 11:41:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:50.793 11:41:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.793 11:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:50.793 ************************************ 00:04:50.793 START TEST dpdk_mem_utility 00:04:50.793 ************************************ 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.793 * Looking for test storage... 
00:04:50.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:50.793 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:50.793 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=805607 00:04:50.793 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.793 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 805607 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 805607 ']' 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:50.793 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.793 [2024-07-12 11:41:40.144419] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:50.793 [2024-07-12 11:41:40.144513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805607 ] 00:04:50.793 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.793 [2024-07-12 11:41:40.204734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.052 [2024-07-12 11:41:40.312010] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.312 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:51.312 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:04:51.312 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.312 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.312 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.312 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.312 { 00:04:51.312 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.312 } 00:04:51.312 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.312 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.312 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:51.312 1 heaps totaling size 814.000000 MiB 00:04:51.312 size: 814.000000 MiB heap id: 0 00:04:51.312 end heaps---------- 00:04:51.312 8 mempools totaling size 598.116089 MiB 00:04:51.312 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.312 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.312 size: 84.521057 MiB name: bdev_io_805607 00:04:51.312 size: 51.011292 MiB name: evtpool_805607 00:04:51.312 size: 50.003479 MiB name: 
msgpool_805607 00:04:51.312 size: 21.763794 MiB name: PDU_Pool 00:04:51.312 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.312 size: 0.026123 MiB name: Session_Pool 00:04:51.312 end mempools------- 00:04:51.312 6 memzones totaling size 4.142822 MiB 00:04:51.312 size: 1.000366 MiB name: RG_ring_0_805607 00:04:51.312 size: 1.000366 MiB name: RG_ring_1_805607 00:04:51.312 size: 1.000366 MiB name: RG_ring_4_805607 00:04:51.312 size: 1.000366 MiB name: RG_ring_5_805607 00:04:51.312 size: 0.125366 MiB name: RG_ring_2_805607 00:04:51.312 size: 0.015991 MiB name: RG_ring_3_805607 00:04:51.312 end memzones------- 00:04:51.312 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.312 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:51.312 list of free elements. size: 12.519348 MiB 00:04:51.312 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:51.312 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:51.312 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:51.312 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:51.312 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:51.312 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:51.312 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:51.312 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:51.312 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:51.312 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:51.312 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:51.312 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:51.312 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:51.312 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:51.312 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:51.312 list of standard malloc elements. 
size: 199.218079 MiB 00:04:51.312 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:51.312 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:51.312 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:51.312 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:51.312 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:51.312 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:51.313 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:51.313 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:51.313 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:51.313 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:51.313 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:51.313 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:51.313 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:51.313 list of memzone associated elements. 
size: 602.262573 MiB 00:04:51.313 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:51.313 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.313 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:51.313 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.313 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:51.313 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_805607_0 00:04:51.313 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:51.313 associated memzone info: size: 48.002930 MiB name: MP_evtpool_805607_0 00:04:51.313 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:51.313 associated memzone info: size: 48.002930 MiB name: MP_msgpool_805607_0 00:04:51.313 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:51.313 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.313 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:51.313 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.313 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:51.313 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_805607 00:04:51.313 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:51.313 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_805607 00:04:51.313 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:51.313 associated memzone info: size: 1.007996 MiB name: MP_evtpool_805607 00:04:51.313 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:51.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.313 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:51.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.313 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:51.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.313 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:51.313 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.313 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:51.313 associated memzone info: size: 1.000366 MiB name: RG_ring_0_805607 00:04:51.313 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:51.313 associated memzone info: size: 1.000366 MiB name: RG_ring_1_805607 00:04:51.313 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:51.313 associated memzone info: size: 1.000366 MiB name: RG_ring_4_805607 00:04:51.313 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:51.313 associated memzone info: size: 1.000366 MiB name: RG_ring_5_805607 00:04:51.313 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:51.313 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_805607 00:04:51.313 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:51.313 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.313 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:51.313 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.313 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:51.313 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.313 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:51.313 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_805607 00:04:51.313 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:51.313 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.313 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:51.313 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.313 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:51.313 associated memzone info: size: 0.015991 MiB name: RG_ring_3_805607 00:04:51.313 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:51.313 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.313 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:51.313 associated memzone info: size: 0.000183 MiB name: MP_msgpool_805607 00:04:51.313 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:51.313 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_805607 00:04:51.313 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:51.313 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.313 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.313 11:41:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 805607 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 805607 ']' 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 805607 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 805607 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:51.313 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 805607' 00:04:51.314 killing process with pid 805607 00:04:51.314 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 805607 00:04:51.314 11:41:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 805607 00:04:51.884 00:04:51.884 real 0m1.090s 00:04:51.884 user 0m1.068s 00:04:51.884 sys 0m0.387s 00:04:51.884 11:41:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.884 11:41:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.884 ************************************ 00:04:51.884 END TEST dpdk_mem_utility 00:04:51.884 ************************************ 00:04:51.884 11:41:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.884 11:41:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.884 11:41:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.884 11:41:41 -- common/autotest_common.sh@10 -- # set +x 00:04:51.884 ************************************ 00:04:51.884 START TEST event 00:04:51.884 ************************************ 00:04:51.884 11:41:41 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.884 * Looking for test storage... 
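The memory report above is produced in two steps: the running target is asked to dump its DPDK memory state, and the dump file is then summarized offline. Roughly, per the commands in the trace (rpc_cmd is the harness wrapper; scripts/rpc.py against the default socket is the direct equivalent):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  $SPDK_DIR/scripts/dpdk_mem_info.py                 # heap and mempool summary, as shown above
  $SPDK_DIR/scripts/dpdk_mem_info.py -m 0            # detailed free/busy element list for heap 0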
00:04:51.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.884 11:41:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:51.884 11:41:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.884 11:41:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.884 11:41:41 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:51.884 11:41:41 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.884 11:41:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.884 ************************************ 00:04:51.884 START TEST event_perf 00:04:51.884 ************************************ 00:04:51.884 11:41:41 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.884 Running I/O for 1 seconds...[2024-07-12 11:41:41.273361] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:51.884 [2024-07-12 11:41:41.273434] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805798 ] 00:04:51.884 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.884 [2024-07-12 11:41:41.333157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.143 [2024-07-12 11:41:41.447050] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.143 [2024-07-12 11:41:41.447106] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.143 [2024-07-12 11:41:41.447174] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.143 [2024-07-12 11:41:41.447177] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.114 Running I/O for 1 seconds... 00:04:53.114 lcore 0: 234859 00:04:53.114 lcore 1: 234858 00:04:53.114 lcore 2: 234858 00:04:53.114 lcore 3: 234859 00:04:53.114 done. 00:04:53.114 00:04:53.114 real 0m1.302s 00:04:53.114 user 0m4.208s 00:04:53.114 sys 0m0.088s 00:04:53.114 11:41:42 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.114 11:41:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.114 ************************************ 00:04:53.114 END TEST event_perf 00:04:53.114 ************************************ 00:04:53.114 11:41:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.114 11:41:42 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:53.114 11:41:42 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.114 11:41:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.371 ************************************ 00:04:53.371 START TEST event_reactor 00:04:53.371 ************************************ 00:04:53.371 11:41:42 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.371 [2024-07-12 11:41:42.621472] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
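The event_perf figures above (four lcores, roughly 234k events each) come from a single invocation; per the trace, -m 0xF matches the four reactors reported and -t 1 the one-second run:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1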
00:04:53.371 [2024-07-12 11:41:42.621530] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805955 ] 00:04:53.371 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.371 [2024-07-12 11:41:42.678994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.371 [2024-07-12 11:41:42.780601] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.745 test_start 00:04:54.746 oneshot 00:04:54.746 tick 100 00:04:54.746 tick 100 00:04:54.746 tick 250 00:04:54.746 tick 100 00:04:54.746 tick 100 00:04:54.746 tick 100 00:04:54.746 tick 250 00:04:54.746 tick 500 00:04:54.746 tick 100 00:04:54.746 tick 100 00:04:54.746 tick 250 00:04:54.746 tick 100 00:04:54.746 tick 100 00:04:54.746 test_end 00:04:54.746 00:04:54.746 real 0m1.279s 00:04:54.746 user 0m1.208s 00:04:54.746 sys 0m0.067s 00:04:54.746 11:41:43 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.746 11:41:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 ************************************ 00:04:54.746 END TEST event_reactor 00:04:54.746 ************************************ 00:04:54.746 11:41:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.746 11:41:43 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:54.746 11:41:43 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.746 11:41:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 ************************************ 00:04:54.746 START TEST event_reactor_perf 00:04:54.746 ************************************ 00:04:54.746 11:41:43 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.746 [2024-07-12 11:41:43.951681] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
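The test_start / oneshot / tick ... / test_end markers above are the reactor test's own progress output from a one-second, single-core run (core mask 0x1 per the EAL parameters); the invocation, as traced:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1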
00:04:54.746 [2024-07-12 11:41:43.951752] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806123 ] 00:04:54.746 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.746 [2024-07-12 11:41:44.010531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.746 [2024-07-12 11:41:44.116990] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.125 test_start 00:04:56.125 test_end 00:04:56.125 Performance: 442192 events per second 00:04:56.125 00:04:56.125 real 0m1.290s 00:04:56.125 user 0m1.201s 00:04:56.125 sys 0m0.084s 00:04:56.125 11:41:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:56.125 11:41:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.125 ************************************ 00:04:56.125 END TEST event_reactor_perf 00:04:56.125 ************************************ 00:04:56.125 11:41:45 event -- event/event.sh@49 -- # uname -s 00:04:56.125 11:41:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.125 11:41:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.125 11:41:45 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.125 11:41:45 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.125 11:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.125 ************************************ 00:04:56.125 START TEST event_scheduler 00:04:56.125 ************************************ 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.125 * Looking for test storage... 00:04:56.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=806316 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 806316 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 806316 ']' 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.125 [2024-07-12 11:41:45.375111] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:04:56.125 [2024-07-12 11:41:45.375206] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806316 ] 00:04:56.125 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.125 [2024-07-12 11:41:45.432862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.125 [2024-07-12 11:41:45.541656] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.125 [2024-07-12 11:41:45.541813] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.125 [2024-07-12 11:41:45.541742] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.125 [2024-07-12 11:41:45.541810] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:04:56.125 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.125 11:41:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.125 POWER: Env isn't set yet! 00:04:56.125 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:56.125 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:56.125 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:56.125 POWER: Cannot get available frequencies of lcore 0 00:04:56.125 POWER: Attempting to initialise PSTAT power management... 
00:04:56.125 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:56.125 POWER: Initialized successfully for lcore 0 power management 00:04:56.125 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:56.125 POWER: Initialized successfully for lcore 1 power management 00:04:56.385 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:56.385 POWER: Initialized successfully for lcore 2 power management 00:04:56.385 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:56.385 POWER: Initialized successfully for lcore 3 power management 00:04:56.385 [2024-07-12 11:41:45.633123] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.385 [2024-07-12 11:41:45.633144] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.385 [2024-07-12 11:41:45.633156] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 [2024-07-12 11:41:45.737360] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 ************************************ 00:04:56.385 START TEST scheduler_create_thread 00:04:56.385 ************************************ 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 2 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 3 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 4 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 5 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 6 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 7 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 8 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 9 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 10 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.385 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.386 11:41:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.321 11:41:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:57.321 11:41:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:57.321 11:41:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:57.321 11:41:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.699 11:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:58.699 11:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.699 11:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.699 11:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:58.699 11:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.074 11:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.074 00:05:00.074 real 0m3.381s 00:05:00.074 user 0m0.011s 00:05:00.074 sys 0m0.004s 00:05:00.074 11:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.074 11:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.074 ************************************ 00:05:00.074 END TEST scheduler_create_thread 00:05:00.074 ************************************ 00:05:00.074 11:41:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.074 11:41:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 806316 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 806316 ']' 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 806316 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
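The scheduler run above follows the --wait-for-rpc pattern: the app starts with its framework paused, the dynamic scheduler is selected over RPC, initialization is then released, and threads are created and deleted through the test's scheduler_plugin RPCs. A condensed sketch based on the trace (rpc_cmd is the harness RPC wrapper; scripts/rpc.py against the default /var/tmp/spdk.sock is the direct equivalent):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  # wait for /var/tmp/spdk.sock before issuing RPCs (the harness uses waitforlisten here)
  $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic   # pick the dynamic scheduler before init
  $SPDK_DIR/scripts/rpc.py framework_start_init              # release framework initialization
  # thread-management calls go through the test's RPC plugin, e.g. (assumes scheduler_plugin
  # is importable, as the harness arranges):
  $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100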
00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 806316 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 806316' 00:05:00.074 killing process with pid 806316 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 806316 00:05:00.074 11:41:49 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 806316 00:05:00.074 [2024-07-12 11:41:49.526270] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:00.334 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:00.334 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:00.334 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:00.334 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:00.334 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:00.334 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:00.334 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:00.334 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:00.593 00:05:00.593 real 0m4.558s 00:05:00.593 user 0m8.076s 00:05:00.593 sys 0m0.342s 00:05:00.593 11:41:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.593 11:41:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 ************************************ 00:05:00.593 END TEST event_scheduler 00:05:00.593 ************************************ 00:05:00.593 11:41:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.593 11:41:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.593 11:41:49 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.593 11:41:49 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.593 11:41:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 ************************************ 00:05:00.593 START TEST app_repeat 00:05:00.593 ************************************ 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=806912 00:05:00.593 11:41:49 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 806912' 00:05:00.593 Process app_repeat pid: 806912 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.593 spdk_app_start Round 0 00:05:00.593 11:41:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 806912 /var/tmp/spdk-nbd.sock 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 806912 ']' 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.593 11:41:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 [2024-07-12 11:41:49.913992] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:00.593 [2024-07-12 11:41:49.914053] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806912 ] 00:05:00.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.593 [2024-07-12 11:41:49.979259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.851 [2024-07-12 11:41:50.100965] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.851 [2024-07-12 11:41:50.100970] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.851 11:41:50 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:00.851 11:41:50 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:00.851 11:41:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.108 Malloc0 00:05:01.108 11:41:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.366 Malloc1 00:05:01.366 11:41:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.366 11:41:50 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.366 11:41:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.625 /dev/nbd0 00:05:01.625 11:41:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.625 11:41:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:01.625 11:41:50 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.625 1+0 records in 00:05:01.625 1+0 records out 00:05:01.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149445 s, 27.4 MB/s 00:05:01.625 11:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.625 11:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:01.625 11:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.625 11:41:51 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:01.625 11:41:51 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:01.625 11:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.625 11:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.625 11:41:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.883 /dev/nbd1 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:01.883 11:41:51 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.883 1+0 records in 00:05:01.883 1+0 records out 00:05:01.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197011 s, 20.8 MB/s 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:01.883 11:41:51 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.883 11:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.141 { 00:05:02.141 "nbd_device": "/dev/nbd0", 00:05:02.141 "bdev_name": "Malloc0" 00:05:02.141 }, 00:05:02.141 { 00:05:02.141 "nbd_device": "/dev/nbd1", 00:05:02.141 "bdev_name": "Malloc1" 00:05:02.141 } 00:05:02.141 ]' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.141 { 00:05:02.141 "nbd_device": "/dev/nbd0", 00:05:02.141 "bdev_name": "Malloc0" 00:05:02.141 }, 00:05:02.141 { 00:05:02.141 "nbd_device": "/dev/nbd1", 00:05:02.141 "bdev_name": "Malloc1" 00:05:02.141 } 00:05:02.141 ]' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.141 /dev/nbd1' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.141 /dev/nbd1' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.141 11:41:51 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.141 256+0 records in 00:05:02.141 256+0 records out 00:05:02.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491793 s, 213 MB/s 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.141 256+0 records in 00:05:02.141 256+0 records out 00:05:02.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234692 s, 44.7 MB/s 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.141 256+0 records in 00:05:02.141 256+0 records out 00:05:02.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021895 s, 47.9 MB/s 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.141 11:41:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.400 11:41:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.658 11:41:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.916 11:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.174 11:41:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.174 11:41:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.431 11:41:52 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:03.691 [2024-07-12 11:41:53.015521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.691 [2024-07-12 11:41:53.130749] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.691 [2024-07-12 11:41:53.130751] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.951 [2024-07-12 11:41:53.192372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.951 [2024-07-12 11:41:53.192436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.486 11:41:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.486 11:41:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.486 spdk_app_start Round 1 00:05:06.486 11:41:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 806912 /var/tmp/spdk-nbd.sock 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 806912 ']' 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:06.486 11:41:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.745 11:41:55 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:06.745 11:41:55 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:06.745 11:41:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.745 Malloc0 00:05:07.004 11:41:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.004 Malloc1 00:05:07.264 11:41:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
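Stripped of the xtrace noise, each app_repeat round drives the target through the same handful of RPCs. A minimal sketch of that sequence, assuming the app is already listening on /var/tmp/spdk-nbd.sock as it is above (the sizes 64 and 4096 are the bdev_malloc_create arguments visible in the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  # create two 64 MB malloc bdevs with a 4096-byte block size; each call prints the new bdev name
  $RPC -s $SOCK bdev_malloc_create 64 4096        # -> Malloc0
  $RPC -s $SOCK bdev_malloc_create 64 4096        # -> Malloc1
  # export each bdev to the kernel as an NBD block device
  $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
  $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1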
00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.264 11:41:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.522 /dev/nbd0 00:05:07.522 11:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.522 11:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.522 1+0 records in 00:05:07.522 1+0 records out 00:05:07.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222477 s, 18.4 MB/s 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:07.522 11:41:56 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:07.522 11:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.522 11:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.523 11:41:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.779 /dev/nbd1 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
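The waitfornbd checks interleaved above follow a poll-then-probe pattern. Roughly, as reconstructed from the trace (the retry sleep and the /tmp scratch path are stand-ins, and the real helper also retries the read up to 20 times):

  waitfornbd() {
      local nbd_name=$1 i
      # poll /proc/partitions until the kernel has registered the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # then prove the device answers I/O: one 4 KiB O_DIRECT read must produce data
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }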
00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.780 1+0 records in 00:05:07.780 1+0 records out 00:05:07.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199568 s, 20.5 MB/s 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:07.780 11:41:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.780 11:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.038 { 00:05:08.038 "nbd_device": "/dev/nbd0", 00:05:08.038 "bdev_name": "Malloc0" 00:05:08.038 }, 00:05:08.038 { 00:05:08.038 "nbd_device": "/dev/nbd1", 00:05:08.038 "bdev_name": "Malloc1" 00:05:08.038 } 00:05:08.038 ]' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.038 { 00:05:08.038 "nbd_device": "/dev/nbd0", 00:05:08.038 "bdev_name": "Malloc0" 00:05:08.038 }, 00:05:08.038 { 00:05:08.038 "nbd_device": "/dev/nbd1", 00:05:08.038 "bdev_name": "Malloc1" 00:05:08.038 } 00:05:08.038 ]' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.038 /dev/nbd1' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.038 /dev/nbd1' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.038 256+0 records in 00:05:08.038 256+0 records out 00:05:08.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485064 s, 216 MB/s 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.038 256+0 records in 00:05:08.038 256+0 records out 00:05:08.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209383 s, 50.1 MB/s 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.038 256+0 records in 00:05:08.038 256+0 records out 00:05:08.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229564 s, 45.7 MB/s 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.038 11:41:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.295 
11:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.295 11:41:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.296 11:41:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.552 11:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.809 11:41:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.809 11:41:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.066 11:41:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.346 [2024-07-12 11:41:58.776604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.603 [2024-07-12 11:41:58.891664] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.603 [2024-07-12 11:41:58.891669] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.603 [2024-07-12 11:41:58.949987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
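Teardown at the end of a round is the mirror image: stop both NBD exports, wait for the names to leave /proc/partitions, confirm the RPC side reports zero disks, then ask the app to exit. A sketch reusing $RPC and $SOCK from the earlier snippet (the unbounded while loops stand in for the capped 20-iteration loop in waitfornbd_exit):

  # detach both NBD exports
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0
  $RPC -s $SOCK nbd_stop_disk /dev/nbd1
  # wait for the names to drop out of /proc/partitions
  while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  while grep -q -w nbd1 /proc/partitions; do sleep 0.1; done
  # the RPC view must now be an empty list
  count=$($RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
  # ask the app to exit cleanly and give it time to tear down
  $RPC -s $SOCK spdk_kill_instance SIGTERM
  sleep 3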
00:05:09.603 [2024-07-12 11:41:58.950052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.149 11:42:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.149 11:42:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.149 spdk_app_start Round 2 00:05:12.149 11:42:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 806912 /var/tmp/spdk-nbd.sock 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 806912 ']' 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:12.149 11:42:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.407 11:42:01 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:12.407 11:42:01 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:12.407 11:42:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.664 Malloc0 00:05:12.664 11:42:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.922 Malloc1 00:05:12.922 11:42:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.922 11:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.180 /dev/nbd0 00:05:13.180 11:42:02 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.180 11:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.180 1+0 records in 00:05:13.180 1+0 records out 00:05:13.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165767 s, 24.7 MB/s 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:13.180 11:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:13.180 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.180 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.180 11:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.439 /dev/nbd1 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.439 1+0 records in 00:05:13.439 1+0 records out 00:05:13.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021504 s, 19.0 MB/s 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:13.439 11:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.439 11:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.697 { 00:05:13.697 "nbd_device": "/dev/nbd0", 00:05:13.697 "bdev_name": "Malloc0" 00:05:13.697 }, 00:05:13.697 { 00:05:13.697 "nbd_device": "/dev/nbd1", 00:05:13.697 "bdev_name": "Malloc1" 00:05:13.697 } 00:05:13.697 ]' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.697 { 00:05:13.697 "nbd_device": "/dev/nbd0", 00:05:13.697 "bdev_name": "Malloc0" 00:05:13.697 }, 00:05:13.697 { 00:05:13.697 "nbd_device": "/dev/nbd1", 00:05:13.697 "bdev_name": "Malloc1" 00:05:13.697 } 00:05:13.697 ]' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.697 /dev/nbd1' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.697 /dev/nbd1' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.697 11:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.698 11:42:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.698 256+0 records in 00:05:13.698 256+0 records out 00:05:13.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492975 s, 213 MB/s 00:05:13.698 11:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.698 11:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.956 256+0 records in 00:05:13.956 256+0 records out 00:05:13.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020969 s, 50.0 MB/s 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.956 256+0 records in 00:05:13.956 256+0 records out 00:05:13.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223207 s, 47.0 MB/s 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.956 11:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
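The data-path verification itself is plain dd plus cmp: fill a 1 MiB scratch file with random bytes, write it through each NBD device with O_DIRECT, then compare the device contents back against the file. Sketch with an assumed scratch path in place of the repo's nbdrandtest file:

  SCRATCH=/tmp/nbdrandtest
  dd if=/dev/urandom of=$SCRATCH bs=4096 count=256              # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=$SCRATCH of=$dev bs=4096 count=256 oflag=direct     # push it through each export
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $SCRATCH $dev                                # read back and byte-compare
  done
  rm $SCRATCH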
00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.213 11:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.470 11:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.727 11:42:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.727 11:42:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.986 11:42:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.244 [2024-07-12 11:42:04.623006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.502 [2024-07-12 11:42:04.740286] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.502 [2024-07-12 11:42:04.740287] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.502 [2024-07-12 11:42:04.803065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.502 [2024-07-12 11:42:04.803139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:18.026 11:42:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 806912 /var/tmp/spdk-nbd.sock 00:05:18.026 11:42:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 806912 ']' 00:05:18.026 11:42:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.026 11:42:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:18.026 11:42:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.027 11:42:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:18.027 11:42:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:18.285 11:42:07 event.app_repeat -- event/event.sh@39 -- # killprocess 806912 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 806912 ']' 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 806912 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 806912 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 806912' 00:05:18.285 killing process with pid 806912 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@968 -- # kill 806912 00:05:18.285 11:42:07 event.app_repeat -- common/autotest_common.sh@973 -- # wait 806912 00:05:18.543 spdk_app_start is called in Round 0. 00:05:18.543 Shutdown signal received, stop current app iteration 00:05:18.543 Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 reinitialization... 00:05:18.543 spdk_app_start is called in Round 1. 00:05:18.543 Shutdown signal received, stop current app iteration 00:05:18.543 Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 reinitialization... 00:05:18.543 spdk_app_start is called in Round 2. 00:05:18.543 Shutdown signal received, stop current app iteration 00:05:18.543 Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 reinitialization... 00:05:18.543 spdk_app_start is called in Round 3. 
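killprocess, as exercised here, is deliberately conservative: it checks the PID is still alive, inspects the command name so a stale PID that now belongs to something else (or to sudo) is never signalled, and only then kills and reaps the process. A rough reconstruction from the trace; the sudo branch is not exercised here, so treating it as a refusal is an assumption:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                         # nothing to do if it is already gone
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")        # reactor_0 for the app_repeat target
          [ "$name" = sudo ] && return 1                 # assumption: never signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                        # reap it so the exit status is observed
  }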
00:05:18.543 Shutdown signal received, stop current app iteration 00:05:18.543 11:42:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.543 11:42:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.543 00:05:18.543 real 0m17.993s 00:05:18.543 user 0m38.900s 00:05:18.543 sys 0m3.195s 00:05:18.543 11:42:07 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.543 11:42:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 END TEST app_repeat 00:05:18.543 ************************************ 00:05:18.543 11:42:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.543 11:42:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.543 11:42:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.543 11:42:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.543 11:42:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 START TEST cpu_locks 00:05:18.543 ************************************ 00:05:18.543 11:42:07 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.543 * Looking for test storage... 00:05:18.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:18.543 11:42:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.543 11:42:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.543 11:42:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.543 11:42:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.543 11:42:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.543 11:42:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.543 11:42:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.544 ************************************ 00:05:18.544 START TEST default_locks 00:05:18.544 ************************************ 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=809741 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 809741 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 809741 ']' 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
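The default_locks case that follows relies on spdk_tgt taking a per-core lock file when started with -m 0x1; the test launches the target, waits for its RPC socket, and asserts the lock is visible with lslocks. A minimal sketch (the socket wait is a crude stand-in for the real waitforlisten helper):

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $SPDK_TGT -m 0x1 &                                      # single-core target takes the core lock
  pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
  lslocks -p "$pid" | grep -q spdk_cpu_lock               # the spdk_cpu_lock file must be held

The "lslocks: write error" seen in the trace below is almost certainly lslocks hitting a closed pipe, since grep -q exits as soon as it finds a match; it does not indicate a test failure.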
00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:18.544 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.802 [2024-07-12 11:42:08.058379] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:18.802 [2024-07-12 11:42:08.058475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809741 ] 00:05:18.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.802 [2024-07-12 11:42:08.116289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.802 [2024-07-12 11:42:08.228831] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.060 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:19.060 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:19.060 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 809741 00:05:19.060 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 809741 00:05:19.060 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.318 lslocks: write error 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 809741 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 809741 ']' 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 809741 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 809741 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 809741' 00:05:19.318 killing process with pid 809741 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 809741 00:05:19.318 11:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 809741 00:05:19.884 11:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 809741 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 809741 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 809741 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 809741 ']' 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (809741) - No such process 00:05:19.885 ERROR: process (pid: 809741) is no longer running 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.885 00:05:19.885 real 0m1.231s 00:05:19.885 user 0m1.150s 00:05:19.885 sys 0m0.543s 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.885 11:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.885 ************************************ 00:05:19.885 END TEST default_locks 00:05:19.885 ************************************ 00:05:19.885 11:42:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.885 11:42:09 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.885 11:42:09 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.885 11:42:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.885 ************************************ 00:05:19.885 START TEST default_locks_via_rpc 00:05:19.885 ************************************ 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=810129 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 810129 00:05:19.885 11:42:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 810129 ']' 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:19.885 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.885 [2024-07-12 11:42:09.334560] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:19.885 [2024-07-12 11:42:09.334635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810129 ] 00:05:19.885 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.143 [2024-07-12 11:42:09.392233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.143 [2024-07-12 11:42:09.500624] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 810129 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 810129 00:05:20.401 11:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 810129 00:05:20.659 11:42:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 810129 ']' 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 810129 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 810129 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 810129' 00:05:20.659 killing process with pid 810129 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 810129 00:05:20.659 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 810129 00:05:21.226 00:05:21.226 real 0m1.312s 00:05:21.226 user 0m1.255s 00:05:21.226 sys 0m0.538s 00:05:21.226 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.226 11:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.226 ************************************ 00:05:21.226 END TEST default_locks_via_rpc 00:05:21.226 ************************************ 00:05:21.226 11:42:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:21.226 11:42:10 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.226 11:42:10 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.226 11:42:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.226 ************************************ 00:05:21.226 START TEST non_locking_app_on_locked_coremask 00:05:21.226 ************************************ 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=810296 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 810296 /var/tmp/spdk.sock 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 810296 ']' 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.226 11:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.226 [2024-07-12 11:42:10.692461] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:21.226 [2024-07-12 11:42:10.692537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810296 ] 00:05:21.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.485 [2024-07-12 11:42:10.750295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.485 [2024-07-12 11:42:10.861455] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=810307 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 810307 /var/tmp/spdk2.sock 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 810307 ']' 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.744 11:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.744 [2024-07-12 11:42:11.166603] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:21.744 [2024-07-12 11:42:11.166678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810307 ] 00:05:21.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.002 [2024-07-12 11:42:11.263951] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.002 [2024-07-12 11:42:11.263985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.260 [2024-07-12 11:42:11.499239] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.824 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:22.824 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:22.824 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 810296 00:05:22.824 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 810296 00:05:22.824 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.390 lslocks: write error 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 810296 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 810296 ']' 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 810296 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 810296 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 810296' 00:05:23.390 killing process with pid 810296 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 810296 00:05:23.390 11:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 810296 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 810307 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 810307 ']' 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 810307 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 810307 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 810307' 00:05:24.327 killing 
process with pid 810307 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 810307 00:05:24.327 11:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 810307 00:05:24.585 00:05:24.585 real 0m3.405s 00:05:24.585 user 0m3.559s 00:05:24.585 sys 0m1.062s 00:05:24.585 11:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.585 11:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.585 ************************************ 00:05:24.585 END TEST non_locking_app_on_locked_coremask 00:05:24.585 ************************************ 00:05:24.585 11:42:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.585 11:42:14 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.585 11:42:14 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.585 11:42:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.843 ************************************ 00:05:24.843 START TEST locking_app_on_unlocked_coremask 00:05:24.843 ************************************ 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=810731 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 810731 /var/tmp/spdk.sock 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 810731 ']' 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:24.843 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.843 [2024-07-12 11:42:14.144895] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:24.843 [2024-07-12 11:42:14.144982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810731 ] 00:05:24.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.843 [2024-07-12 11:42:14.201808] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.843 [2024-07-12 11:42:14.201860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.843 [2024-07-12 11:42:14.310498] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.101 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=810757 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 810757 /var/tmp/spdk2.sock 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 810757 ']' 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:25.102 11:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.360 [2024-07-12 11:42:14.618509] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:05:25.360 [2024-07-12 11:42:14.618581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810757 ] 00:05:25.360 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.360 [2024-07-12 11:42:14.709232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.618 [2024-07-12 11:42:14.947768] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.210 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:26.210 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:26.210 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 810757 00:05:26.210 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 810757 00:05:26.210 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.468 lslocks: write error 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 810731 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 810731 ']' 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 810731 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 810731 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 810731' 00:05:26.468 killing process with pid 810731 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 810731 00:05:26.468 11:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 810731 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 810757 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 810757 ']' 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 810757 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 810757 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:27.439 
11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 810757' 00:05:27.439 killing process with pid 810757 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 810757 00:05:27.439 11:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 810757 00:05:28.004 00:05:28.004 real 0m3.276s 00:05:28.004 user 0m3.403s 00:05:28.004 sys 0m1.043s 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.004 ************************************ 00:05:28.004 END TEST locking_app_on_unlocked_coremask 00:05:28.004 ************************************ 00:05:28.004 11:42:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.004 11:42:17 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.004 11:42:17 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.004 11:42:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.004 ************************************ 00:05:28.004 START TEST locking_app_on_locked_coremask 00:05:28.004 ************************************ 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=811172 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 811172 /var/tmp/spdk.sock 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 811172 ']' 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.004 11:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.004 [2024-07-12 11:42:17.475993] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:05:28.004 [2024-07-12 11:42:17.476095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811172 ] 00:05:28.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.262 [2024-07-12 11:42:17.538375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.262 [2024-07-12 11:42:17.655891] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=811308 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 811308 /var/tmp/spdk2.sock 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 811308 /var/tmp/spdk2.sock 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 811308 /var/tmp/spdk2.sock 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 811308 ']' 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:29.194 11:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.194 [2024-07-12 11:42:18.457443] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:05:29.194 [2024-07-12 11:42:18.457536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811308 ] 00:05:29.194 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.194 [2024-07-12 11:42:18.554801] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 811172 has claimed it. 00:05:29.194 [2024-07-12 11:42:18.554884] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (811308) - No such process 00:05:29.759 ERROR: process (pid: 811308) is no longer running 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 811172 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 811172 00:05:29.759 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.016 lslocks: write error 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 811172 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 811172 ']' 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 811172 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 811172 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 811172' 00:05:30.016 killing process with pid 811172 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 811172 00:05:30.016 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 811172 00:05:30.581 00:05:30.581 real 0m2.517s 00:05:30.581 user 0m2.849s 00:05:30.581 sys 0m0.670s 00:05:30.582 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.582 11:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.582 ************************************ 00:05:30.582 END TEST locking_app_on_locked_coremask 00:05:30.582 ************************************ 00:05:30.582 11:42:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.582 11:42:19 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.582 11:42:19 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.582 11:42:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.582 ************************************ 00:05:30.582 START TEST locking_overlapped_coremask 00:05:30.582 ************************************ 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=811481 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 811481 /var/tmp/spdk.sock 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 811481 ']' 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:30.582 11:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.582 [2024-07-12 11:42:20.041000] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:05:30.582 [2024-07-12 11:42:20.041095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811481 ] 00:05:30.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.840 [2024-07-12 11:42:20.106268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.840 [2024-07-12 11:42:20.227962] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.840 [2024-07-12 11:42:20.227997] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.840 [2024-07-12 11:42:20.228001] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=811614 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 811614 /var/tmp/spdk2.sock 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 811614 /var/tmp/spdk2.sock 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 811614 /var/tmp/spdk2.sock 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 811614 ']' 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:31.774 11:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.774 [2024-07-12 11:42:21.021610] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:05:31.774 [2024-07-12 11:42:21.021706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811614 ] 00:05:31.774 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.774 [2024-07-12 11:42:21.111877] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 811481 has claimed it. 00:05:31.774 [2024-07-12 11:42:21.111939] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (811614) - No such process 00:05:32.340 ERROR: process (pid: 811614) is no longer running 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 811481 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 811481 ']' 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 811481 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 811481 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 811481' 00:05:32.340 killing process with pid 811481 00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 811481 
00:05:32.340 11:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 811481 00:05:32.906 00:05:32.906 real 0m2.207s 00:05:32.906 user 0m6.157s 00:05:32.906 sys 0m0.487s 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.906 ************************************ 00:05:32.906 END TEST locking_overlapped_coremask 00:05:32.906 ************************************ 00:05:32.906 11:42:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.906 11:42:22 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.906 11:42:22 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.906 11:42:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.906 ************************************ 00:05:32.906 START TEST locking_overlapped_coremask_via_rpc 00:05:32.906 ************************************ 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=811783 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 811783 /var/tmp/spdk.sock 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 811783 ']' 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.906 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.906 [2024-07-12 11:42:22.293840] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:32.906 [2024-07-12 11:42:22.293934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811783 ] 00:05:32.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.906 [2024-07-12 11:42:22.352441] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.906 [2024-07-12 11:42:22.352483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.165 [2024-07-12 11:42:22.465557] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.165 [2024-07-12 11:42:22.465616] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.165 [2024-07-12 11:42:22.465620] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=811910 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 811910 /var/tmp/spdk2.sock 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 811910 ']' 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.423 11:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.423 [2024-07-12 11:42:22.771108] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:33.423 [2024-07-12 11:42:22.771202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811910 ] 00:05:33.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.423 [2024-07-12 11:42:22.859453] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.423 [2024-07-12 11:42:22.859487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.681 [2024-07-12 11:42:23.083361] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.681 [2024-07-12 11:42:23.083426] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:05:33.681 [2024-07-12 11:42:23.083428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.247 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.247 [2024-07-12 11:42:23.731977] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 811783 has claimed it. 
00:05:34.504 request: 00:05:34.504 { 00:05:34.504 "method": "framework_enable_cpumask_locks", 00:05:34.504 "req_id": 1 00:05:34.504 } 00:05:34.504 Got JSON-RPC error response 00:05:34.504 response: 00:05:34.504 { 00:05:34.504 "code": -32603, 00:05:34.504 "message": "Failed to claim CPU core: 2" 00:05:34.504 } 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 811783 /var/tmp/spdk.sock 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 811783 ']' 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 811910 /var/tmp/spdk2.sock 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 811910 ']' 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:34.504 11:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.762 00:05:34.762 real 0m1.980s 00:05:34.762 user 0m1.014s 00:05:34.762 sys 0m0.178s 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.762 11:42:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.762 ************************************ 00:05:34.762 END TEST locking_overlapped_coremask_via_rpc 00:05:34.762 ************************************ 00:05:34.762 11:42:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.762 11:42:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 811783 ]] 00:05:34.762 11:42:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 811783 00:05:34.762 11:42:24 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 811783 ']' 00:05:34.762 11:42:24 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 811783 00:05:34.762 11:42:24 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:05:34.762 11:42:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:34.762 11:42:24 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 811783 00:05:35.020 11:42:24 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:35.020 11:42:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:35.020 11:42:24 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 811783' 00:05:35.020 killing process with pid 811783 00:05:35.020 11:42:24 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 811783 00:05:35.020 11:42:24 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 811783 00:05:35.278 11:42:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 811910 ]] 00:05:35.278 11:42:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 811910 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 811910 ']' 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 811910 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
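The check_remaining_locks call above passes because the surviving target still holds exactly the per-core lock files the test expects. A standalone sketch of the same comparison, with the expected indices 000-002 taken from the expansion shown in the log:

# Both arrays expand to sorted path lists, so comparing the joined arrays is enough
# to detect missing or extra /var/tmp/spdk_cpu_lock_* files.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"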
00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 811910 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 811910' 00:05:35.278 killing process with pid 811910 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 811910 00:05:35.278 11:42:24 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 811910 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 811783 ]] 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 811783 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 811783 ']' 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 811783 00:05:35.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (811783) - No such process 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 811783 is not found' 00:05:35.845 Process with pid 811783 is not found 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 811910 ]] 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 811910 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 811910 ']' 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 811910 00:05:35.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (811910) - No such process 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 811910 is not found' 00:05:35.845 Process with pid 811910 is not found 00:05:35.845 11:42:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.845 00:05:35.845 real 0m17.264s 00:05:35.845 user 0m30.238s 00:05:35.845 sys 0m5.411s 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.845 11:42:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.845 ************************************ 00:05:35.845 END TEST cpu_locks 00:05:35.845 ************************************ 00:05:35.845 00:05:35.845 real 0m44.039s 00:05:35.845 user 1m23.963s 00:05:35.845 sys 0m9.429s 00:05:35.845 11:42:25 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.845 11:42:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.845 ************************************ 00:05:35.845 END TEST event 00:05:35.845 ************************************ 00:05:35.845 11:42:25 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:35.845 11:42:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:35.845 11:42:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:35.845 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.845 ************************************ 00:05:35.845 START TEST thread 00:05:35.845 ************************************ 00:05:35.845 11:42:25 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:35.845 * Looking for test storage... 00:05:35.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:35.845 11:42:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.845 11:42:25 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:05:35.845 11:42:25 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:35.845 11:42:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.104 ************************************ 00:05:36.104 START TEST thread_poller_perf 00:05:36.104 ************************************ 00:05:36.104 11:42:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.104 [2024-07-12 11:42:25.354430] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:36.104 [2024-07-12 11:42:25.354494] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812273 ] 00:05:36.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.104 [2024-07-12 11:42:25.420240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.104 [2024-07-12 11:42:25.535959] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.104 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:37.477 ====================================== 00:05:37.477 busy:2714653288 (cyc) 00:05:37.477 total_run_count: 291000 00:05:37.477 tsc_hz: 2700000000 (cyc) 00:05:37.477 ====================================== 00:05:37.477 poller_cost: 9328 (cyc), 3454 (nsec) 00:05:37.477 00:05:37.477 real 0m1.325s 00:05:37.477 user 0m1.243s 00:05:37.477 sys 0m0.077s 00:05:37.477 11:42:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:37.477 11:42:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.477 ************************************ 00:05:37.477 END TEST thread_poller_perf 00:05:37.477 ************************************ 00:05:37.477 11:42:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.477 11:42:26 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:05:37.477 11:42:26 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:37.477 11:42:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.477 ************************************ 00:05:37.477 START TEST thread_poller_perf 00:05:37.477 ************************************ 00:05:37.477 11:42:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.477 [2024-07-12 11:42:26.727494] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
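The poller_cost line in the summary above follows directly from the reported busy cycles: cycles per poll is busy divided by total_run_count, and the nanosecond figure rescales that by tsc_hz. A small sketch reproducing the 1-microsecond-period run's numbers with integer arithmetic (which matches the reported, rounded-down values):

# Reproduces "poller_cost: 9328 (cyc), 3454 (nsec)" from the run above.
busy=2714653288      # busy: total busy cycles
runs=291000          # total_run_count
tsc_hz=2700000000    # TSC frequency in Hz
cyc=$(( busy / runs ))                    # 9328 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))     # 3454 nanoseconds per poll
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"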
00:05:37.477 [2024-07-12 11:42:26.727553] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812439 ] 00:05:37.477 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.477 [2024-07-12 11:42:26.789737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.477 [2024-07-12 11:42:26.904661] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:38.851 ====================================== 00:05:38.851 busy:2702891664 (cyc) 00:05:38.851 total_run_count: 3822000 00:05:38.851 tsc_hz: 2700000000 (cyc) 00:05:38.851 ====================================== 00:05:38.851 poller_cost: 707 (cyc), 261 (nsec) 00:05:38.851 00:05:38.851 real 0m1.312s 00:05:38.851 user 0m1.231s 00:05:38.851 sys 0m0.075s 00:05:38.851 11:42:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.851 11:42:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.851 ************************************ 00:05:38.851 END TEST thread_poller_perf 00:05:38.851 ************************************ 00:05:38.851 11:42:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.851 00:05:38.851 real 0m2.786s 00:05:38.851 user 0m2.537s 00:05:38.851 sys 0m0.250s 00:05:38.851 11:42:28 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.851 11:42:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.851 ************************************ 00:05:38.851 END TEST thread 00:05:38.851 ************************************ 00:05:38.851 11:42:28 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:38.851 11:42:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:38.851 11:42:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.851 11:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:38.851 ************************************ 00:05:38.851 START TEST accel 00:05:38.851 ************************************ 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:38.851 * Looking for test storage... 
00:05:38.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:38.851 11:42:28 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:38.851 11:42:28 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:38.851 11:42:28 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.851 11:42:28 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=812664 00:05:38.851 11:42:28 accel -- accel/accel.sh@63 -- # waitforlisten 812664 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@830 -- # '[' -z 812664 ']' 00:05:38.851 11:42:28 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:38.851 11:42:28 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:38.851 11:42:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.851 11:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.851 11:42:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.851 11:42:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.851 11:42:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.851 11:42:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.851 11:42:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:38.851 11:42:28 accel -- accel/accel.sh@41 -- # jq -r . 00:05:38.851 [2024-07-12 11:42:28.201133] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:38.851 [2024-07-12 11:42:28.201240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812664 ] 00:05:38.851 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.851 [2024-07-12 11:42:28.259971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.109 [2024-07-12 11:42:28.366442] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@863 -- # return 0 00:05:39.367 11:42:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:39.367 11:42:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:39.367 11:42:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:39.367 11:42:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:39.367 11:42:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:39.367 11:42:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.367 11:42:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.367 11:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.367 11:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.367 11:42:28 accel -- accel/accel.sh@75 -- # killprocess 812664 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@949 -- # '[' -z 812664 ']' 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@953 -- # kill -0 812664 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@954 -- # uname 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 812664 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 812664' 00:05:39.367 killing process with pid 812664 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@968 -- # kill 812664 00:05:39.367 11:42:28 accel -- common/autotest_common.sh@973 -- # wait 812664 00:05:39.932 11:42:29 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:39.932 11:42:29 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.932 11:42:29 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:39.932 11:42:29 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:39.932 11:42:29 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.932 11:42:29 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:39.932 11:42:29 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.932 11:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.932 ************************************ 00:05:39.932 START TEST accel_missing_filename 00:05:39.932 ************************************ 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:39.932 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:39.932 11:42:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:39.932 [2024-07-12 11:42:29.270185] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:39.932 [2024-07-12 11:42:29.270264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812817 ] 00:05:39.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.932 [2024-07-12 11:42:29.335585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.191 [2024-07-12 11:42:29.454671] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.191 [2024-07-12 11:42:29.516716] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.191 [2024-07-12 11:42:29.605057] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:40.449 A filename is required. 
00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:40.449 00:05:40.449 real 0m0.480s 00:05:40.449 user 0m0.375s 00:05:40.449 sys 0m0.140s 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.449 11:42:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:40.449 ************************************ 00:05:40.449 END TEST accel_missing_filename 00:05:40.449 ************************************ 00:05:40.449 11:42:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.449 11:42:29 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:05:40.449 11:42:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.449 11:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.449 ************************************ 00:05:40.449 START TEST accel_compress_verify 00:05:40.449 ************************************ 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.449 11:42:29 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.449 
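The es= bookkeeping just above (es=234 eventually reduced to es=1) is the NOT wrapper from autotest_common.sh confirming that accel_perf really failed when run without an input file: the raw exit status is normalized (the "> 128" branch handles signal-style exits) and the test only proceeds when the final status is non-zero. A simplified sketch of that idea, not the actual helper:

# Hypothetical not_ok helper: succeed only if the wrapped command fails.
# The real NOT in autotest_common.sh additionally normalizes statuses above 128.
not_ok() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
not_ok accel_perf -t 1 -w compress   # mirrors the test's 'NOT accel_perf -t 1 -w compress' above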
11:42:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:40.449 11:42:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.449 [2024-07-12 11:42:29.799293] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:40.449 [2024-07-12 11:42:29.799356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812944 ] 00:05:40.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.449 [2024-07-12 11:42:29.861027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.706 [2024-07-12 11:42:29.980906] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.706 [2024-07-12 11:42:30.043221] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.706 [2024-07-12 11:42:30.124178] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:40.965 00:05:40.965 Compression does not support the verify option, aborting. 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:40.965 00:05:40.965 real 0m0.462s 00:05:40.965 user 0m0.343s 00:05:40.965 sys 0m0.153s 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.965 11:42:30 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:40.965 ************************************ 00:05:40.965 END TEST accel_compress_verify 00:05:40.965 ************************************ 00:05:40.965 11:42:30 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:40.965 11:42:30 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:40.965 11:42:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.965 11:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.965 ************************************ 00:05:40.965 START TEST accel_wrong_workload 00:05:40.965 ************************************ 00:05:40.965 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:05:40.965 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:40.965 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:40.966 
11:42:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:40.966 11:42:30 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:40.966 Unsupported workload type: foobar 00:05:40.966 [2024-07-12 11:42:30.303991] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:40.966 accel_perf options: 00:05:40.966 [-h help message] 00:05:40.966 [-q queue depth per core] 00:05:40.966 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.966 [-T number of threads per core 00:05:40.966 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.966 [-t time in seconds] 00:05:40.966 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.966 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:40.966 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.966 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.966 [-S for crc32c workload, use this seed value (default 0) 00:05:40.966 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.966 [-f for fill workload, use this BYTE value (default 255) 00:05:40.966 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.966 [-y verify result if this switch is on] 00:05:40.966 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.966 Can be used to spread operations across a wider range of memory. 
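The option listing above is accel_perf's help output, printed because the harness deliberately passes an unsupported workload (-w foobar). For reference, a well-formed software invocation using only flags from that listing, mirroring the crc32c case exercised further below (queue depth, transfer size and so on fall back to their defaults, and the -c JSON config the harness pipes in is omitted here):

# One-second crc32c run with a seed value of 32 and result verification enabled.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y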
00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:40.966 00:05:40.966 real 0m0.023s 00:05:40.966 user 0m0.015s 00:05:40.966 sys 0m0.007s 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.966 11:42:30 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:40.966 ************************************ 00:05:40.966 END TEST accel_wrong_workload 00:05:40.966 ************************************ 00:05:40.966 Error: writing output failed: Broken pipe 00:05:40.966 11:42:30 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.966 ************************************ 00:05:40.966 START TEST accel_negative_buffers 00:05:40.966 ************************************ 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:40.966 11:42:30 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:40.966 -x option must be non-negative. 
00:05:40.966 [2024-07-12 11:42:30.372727] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:40.966 accel_perf options: 00:05:40.966 [-h help message] 00:05:40.966 [-q queue depth per core] 00:05:40.966 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.966 [-T number of threads per core 00:05:40.966 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.966 [-t time in seconds] 00:05:40.966 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.966 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:40.966 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.966 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.966 [-S for crc32c workload, use this seed value (default 0) 00:05:40.966 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.966 [-f for fill workload, use this BYTE value (default 255) 00:05:40.966 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.966 [-y verify result if this switch is on] 00:05:40.966 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.966 Can be used to spread operations across a wider range of memory. 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:40.966 00:05:40.966 real 0m0.022s 00:05:40.966 user 0m0.016s 00:05:40.966 sys 0m0.006s 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.966 11:42:30 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:40.966 ************************************ 00:05:40.966 END TEST accel_negative_buffers 00:05:40.966 ************************************ 00:05:40.966 Error: writing output failed: Broken pipe 00:05:40.966 11:42:30 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.966 11:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.966 ************************************ 00:05:40.966 START TEST accel_crc32c 00:05:40.966 ************************************ 00:05:40.966 11:42:30 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:40.966 11:42:30 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:40.966 [2024-07-12 11:42:30.437034] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:40.966 [2024-07-12 11:42:30.437097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813068 ] 00:05:41.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.225 [2024-07-12 11:42:30.499824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.225 [2024-07-12 11:42:30.620986] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.225 11:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.597 11:42:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.597 00:05:42.597 real 0m1.481s 00:05:42.597 user 0m1.342s 00:05:42.597 sys 0m0.141s 00:05:42.597 11:42:31 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:42.598 11:42:31 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.598 ************************************ 00:05:42.598 END TEST accel_crc32c 00:05:42.598 ************************************ 00:05:42.598 11:42:31 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:42.598 11:42:31 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:42.598 11:42:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.598 11:42:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.598 ************************************ 00:05:42.598 START TEST accel_crc32c_C2 00:05:42.598 ************************************ 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:42.598 11:42:31 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.598 11:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.598 [2024-07-12 11:42:31.968932] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:42.598 [2024-07-12 11:42:31.969004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813287 ] 00:05:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.598 [2024-07-12 11:42:32.031803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.855 [2024-07-12 11:42:32.148169] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.855 11:42:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 
11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.254 00:05:44.254 real 0m1.478s 00:05:44.254 user 0m1.332s 00:05:44.254 sys 0m0.147s 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.254 11:42:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.254 ************************************ 00:05:44.254 END TEST accel_crc32c_C2 00:05:44.254 ************************************ 00:05:44.254 11:42:33 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:44.254 11:42:33 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:44.254 11:42:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.254 11:42:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.254 ************************************ 00:05:44.254 START TEST accel_copy 00:05:44.254 ************************************ 00:05:44.254 11:42:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.254 11:42:33 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:44.254 11:42:33 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:44.254 [2024-07-12 11:42:33.490253] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:44.254 [2024-07-12 11:42:33.490322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813448 ] 00:05:44.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.254 [2024-07-12 11:42:33.554004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.254 [2024-07-12 11:42:33.671426] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.255 11:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
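Each IFS=: / read -r var val / case "$var" triple in the trace above appears to be accel.sh consuming accel_perf's output one colon-separated line at a time, remembering the module and opcode it reports (accel_module=software, accel_opc=copy) so the [[ -n software ]] and [[ -n copy ]] checks after the run have something to assert on. A rough, self-contained sketch of that parsing pattern, assuming a simple "key: value" input format rather than quoting the real accel.sh code:

  # Hedged sketch of the IFS=: / read -r var val / case "$var" loop seen in the
  # trace. The "key: value" input below is an assumption for illustration only.
  accel_module= accel_opc=
  while IFS=: read -r var val; do
    case "$var" in
      module)   accel_module=${val//[[:space:]]/} ;;   # e.g. software
      workload) accel_opc=${val//[[:space:]]/} ;;      # e.g. copy
    esac
  done < <(printf '%s\n' 'module: software' 'workload: copy')
  [[ -n $accel_module && -n $accel_opc ]] && echo "ran $accel_opc on the $accel_module module"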
00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:45.623 11:42:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.623 00:05:45.623 real 0m1.463s 00:05:45.623 user 0m1.324s 00:05:45.623 sys 0m0.140s 00:05:45.623 11:42:34 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:45.623 11:42:34 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.623 ************************************ 00:05:45.623 END TEST accel_copy 00:05:45.623 ************************************ 00:05:45.623 11:42:34 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.623 11:42:34 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:05:45.623 11:42:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.623 11:42:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.623 ************************************ 00:05:45.623 START TEST accel_fill 00:05:45.623 ************************************ 00:05:45.623 11:42:34 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.623 11:42:34 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:45.623 11:42:34 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:45.623 [2024-07-12 11:42:34.998956] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:45.623 [2024-07-12 11:42:34.999013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813704 ] 00:05:45.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.623 [2024-07-12 11:42:35.060940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.881 [2024-07-12 11:42:35.180440] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.881 11:42:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:47.254 11:42:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.254 00:05:47.254 real 0m1.477s 00:05:47.254 user 0m1.335s 00:05:47.254 sys 0m0.143s 00:05:47.254 11:42:36 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.254 11:42:36 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:47.254 ************************************ 00:05:47.254 END TEST accel_fill 00:05:47.254 ************************************ 00:05:47.254 11:42:36 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:47.254 11:42:36 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:47.254 11:42:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.254 11:42:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.254 ************************************ 00:05:47.254 START TEST accel_copy_crc32c 00:05:47.254 ************************************ 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:47.254 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
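The build_accel_config trace that ends here (accel_json_cfg=(), the repeated 0 -gt 0 guards for optional modules, local IFS=, and jq -r .) together with the -c /dev/fd/62 argument on the accel_perf command line suggests the harness joins an array of JSON fragments with commas, normalises the result through jq, and hands it to accel_perf over a process-substitution file descriptor. A hedged sketch of that pattern; the fragment contents and the build_config helper name are inventions for illustration, not the real accel.sh code:

  # Hedged sketch: assemble a JSON config from array fragments and pass it to a
  # consumer on a /dev/fd/NN path, mirroring the accel_json_cfg / IFS=, / jq -r .
  # and -c /dev/fd/62 pattern in the trace. The fragment below is made up.
  accel_json_cfg=('"driver": "software"')
  build_config() {
    local IFS=,                         # join fragments with commas
    echo "{${accel_json_cfg[*]}}" | jq -r .
  }
  cat <(build_config)                   # stand-in for: accel_perf -c <(build_config) ...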
00:05:47.254 [2024-07-12 11:42:36.519856] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:47.254 [2024-07-12 11:42:36.519949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813881 ] 00:05:47.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.254 [2024-07-12 11:42:36.585923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.254 [2024-07-12 11:42:36.705449] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.513 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.514 11:42:36 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.514 11:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.887 00:05:48.887 real 0m1.482s 00:05:48.887 user 0m1.337s 00:05:48.887 sys 0m0.145s 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:48.887 11:42:37 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:48.887 ************************************ 00:05:48.887 END TEST accel_copy_crc32c 00:05:48.887 ************************************ 00:05:48.887 11:42:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.887 11:42:38 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:48.887 11:42:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.887 11:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.887 ************************************ 00:05:48.887 START TEST accel_copy_crc32c_C2 00:05:48.887 ************************************ 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:48.887 [2024-07-12 11:42:38.045199] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:48.887 [2024-07-12 11:42:38.045270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814036 ] 00:05:48.887 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.887 [2024-07-12 11:42:38.107292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.887 [2024-07-12 11:42:38.217453] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.887 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.888 11:42:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.262 00:05:50.262 real 0m1.455s 00:05:50.262 user 0m1.320s 00:05:50.262 sys 0m0.136s 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.262 11:42:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:50.262 
************************************ 00:05:50.262 END TEST accel_copy_crc32c_C2 00:05:50.262 ************************************ 00:05:50.262 11:42:39 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:50.262 11:42:39 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:50.262 11:42:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.262 11:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.262 ************************************ 00:05:50.262 START TEST accel_dualcast 00:05:50.262 ************************************ 00:05:50.262 11:42:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.262 11:42:39 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.263 11:42:39 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.263 11:42:39 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.263 11:42:39 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:50.263 11:42:39 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:50.263 [2024-07-12 11:42:39.543514] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
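Every END TEST banner in this section is preceded by a real/user/sys triple for that workload (for example real 0m1.455s, user 0m1.320s, sys 0m0.136s for accel_copy_crc32c_C2 just above), so the console output already contains a rough per-workload timing comparison. A small sketch for pulling those pairs out of a saved copy of this log; the console.log file name and the "timestamp END TEST name" field layout are assumptions:

  # Hedged helper: list each END TEST name with the wall-clock time reported just
  # before it. Assumes this console output was saved to console.log and that lines
  # keep the "HH:MM:SS.mmm real 0mX.XXXs" / "HH:MM:SS.mmm END TEST name ..." shape.
  grep -E 'real[[:space:]]+[0-9]+m|END TEST' console.log |
  awk '/real/     { t = $NF }        # remember the most recent real time
       /END TEST/ { print $4, t }'   # pair it with the test name that follows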
00:05:50.263 [2024-07-12 11:42:39.543576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814278 ] 00:05:50.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.263 [2024-07-12 11:42:39.606358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.263 [2024-07-12 11:42:39.725271] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 
11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.523 11:42:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:51.896 11:42:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.896 00:05:51.896 real 0m1.479s 00:05:51.896 user 0m1.337s 00:05:51.896 sys 0m0.143s 00:05:51.896 11:42:41 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.896 11:42:41 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 ************************************ 00:05:51.896 END TEST accel_dualcast 00:05:51.896 ************************************ 00:05:51.896 11:42:41 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:51.896 11:42:41 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:51.896 11:42:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.896 11:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 ************************************ 00:05:51.896 START TEST accel_compare 00:05:51.896 ************************************ 00:05:51.896 11:42:41 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:51.896 11:42:41 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:51.897 [2024-07-12 11:42:41.066511] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
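The accel_compare block starting above drives the bundled accel_perf example (build/examples/accel_perf) with the config assembled by build_accel_config piped in on /dev/fd/62, a 1-second compare workload, and verification enabled (-y). A minimal standalone sketch of the same run, assuming the workspace path shown in the log; the -q (queue depth) and -o (transfer size) flags are assumptions mirroring the 32 and '4096 bytes' values the trace reads, not flags taken from this log:
# Sketch only: -t/-w/-y come from the log above; -q/-o are assumed here.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w compare -y -q 32 -o 4096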
00:05:51.897 [2024-07-12 11:42:41.066574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814469 ] 00:05:51.897 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.897 [2024-07-12 11:42:41.127919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.897 [2024-07-12 11:42:41.246553] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.897 11:42:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.271 11:42:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:53.272 11:42:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.272 00:05:53.272 real 0m1.473s 00:05:53.272 user 0m1.326s 00:05:53.272 sys 0m0.148s 00:05:53.272 11:42:42 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.272 11:42:42 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:53.272 ************************************ 00:05:53.272 END TEST accel_compare 00:05:53.272 ************************************ 00:05:53.272 11:42:42 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:53.272 11:42:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:53.272 11:42:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.272 11:42:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.272 ************************************ 00:05:53.272 START TEST accel_xor 00:05:53.272 ************************************ 00:05:53.272 11:42:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:53.272 11:42:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:53.272 [2024-07-12 11:42:42.584312] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
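After each run the wrapper only lets the TEST block pass once it has confirmed that a module and an opcode were actually recorded: the accel.sh@27 checks above test the accel_module and accel_opc values set earlier at accel.sh@22/@23 (accel_module=software, accel_opc=compare). A compact sketch of that assertion pattern, using the variable names shown in the trace; a failure here would presumably fail the surrounding run_test invocation:
# Sketch of the post-run checks seen at accel.sh@27 in the trace above.
[[ -n "$accel_module" ]]            # a module (software here) was selected
[[ -n "$accel_opc" ]]               # the workload opcode was parsed
[[ "$accel_module" == software ]]   # this run expects the software path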
00:05:53.272 [2024-07-12 11:42:42.584379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814625 ] 00:05:53.272 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.272 [2024-07-12 11:42:42.645719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.531 [2024-07-12 11:42:42.767851] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.531 11:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.900 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 
11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.901 00:05:54.901 real 0m1.467s 00:05:54.901 user 0m1.318s 00:05:54.901 sys 0m0.150s 00:05:54.901 11:42:44 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.901 11:42:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:54.901 ************************************ 00:05:54.901 END TEST accel_xor 00:05:54.901 ************************************ 00:05:54.901 11:42:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:54.901 11:42:44 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:54.901 11:42:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.901 11:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.901 ************************************ 00:05:54.901 START TEST accel_xor 00:05:54.901 ************************************ 00:05:54.901 11:42:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.901 [2024-07-12 11:42:44.095661] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
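The two accel_xor passes differ only in source count: the first (accel_test -t 1 -w xor -y) leaves val=2 in the trace, while the second adds -x 3 and the trace reads val=3, so three source buffers are XORed per operation. A minimal sketch of both invocations, with the path and flags copied from the log:
# Sketch: -x selects the number of xor source buffers, as the second
# TEST accel_xor block shows (val=3 in the trace).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w xor -y        # default two sources
./build/examples/accel_perf -t 1 -w xor -y -x 3   # three sources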
00:05:54.901 [2024-07-12 11:42:44.095723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814904 ] 00:05:54.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.901 [2024-07-12 11:42:44.157042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.901 [2024-07-12 11:42:44.275769] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.901 11:42:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 
11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:56.302 11:42:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.302 00:05:56.302 real 0m1.472s 00:05:56.302 user 0m1.331s 00:05:56.302 sys 0m0.142s 00:05:56.302 11:42:45 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.302 11:42:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:56.302 ************************************ 00:05:56.302 END TEST accel_xor 00:05:56.302 ************************************ 00:05:56.302 11:42:45 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:56.302 11:42:45 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:56.302 11:42:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.302 11:42:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.302 ************************************ 00:05:56.302 START TEST accel_dif_verify 00:05:56.302 ************************************ 00:05:56.302 11:42:45 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:56.302 11:42:45 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:56.302 [2024-07-12 11:42:45.613799] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
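The dif_verify block starting above carries two sizes the data-moving workloads do not: alongside the two '4096 bytes' buffer values, the trace reads '512 bytes' and '8 bytes', which is consistent with 4096-byte buffers split into 512-byte blocks each protected by an 8-byte DIF tuple; that mapping is inferred from the trace rather than stated by the log. A sketch of the invocation without the wrapper (the harness additionally passes -c /dev/fd/62 with its generated config, omitted here so the line runs on its own):
# Sketch: dif_verify is launched without -y; block and metadata sizing
# come from the wrapper's config rather than extra flags shown in the log.
./build/examples/accel_perf -t 1 -w dif_verify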
00:05:56.302 [2024-07-12 11:42:45.613863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815056 ] 00:05:56.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.302 [2024-07-12 11:42:45.675668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.560 [2024-07-12 11:42:45.795059] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.560 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 
11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.561 11:42:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 
11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:57.933 11:42:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.933 00:05:57.933 real 0m1.478s 00:05:57.933 user 0m1.335s 00:05:57.933 sys 0m0.146s 00:05:57.933 11:42:47 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.933 11:42:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.933 ************************************ 00:05:57.933 END TEST accel_dif_verify 00:05:57.933 ************************************ 00:05:57.933 11:42:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:57.933 11:42:47 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:57.933 11:42:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.933 11:42:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.933 ************************************ 00:05:57.933 START TEST accel_dif_generate 00:05:57.933 ************************************ 00:05:57.933 11:42:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:57.933 [2024-07-12 11:42:47.137317] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:05:57.933 [2024-07-12 11:42:47.137379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815216 ] 00:05:57.933 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.933 [2024-07-12 11:42:47.199934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.933 [2024-07-12 11:42:47.317997] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.933 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.934 11:42:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:59.307 11:42:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.307 00:05:59.307 real 0m1.464s 00:05:59.307 user 0m1.321s 00:05:59.307 sys 0m0.146s 00:05:59.307 
11:42:48 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.307 11:42:48 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:59.307 ************************************ 00:05:59.307 END TEST accel_dif_generate 00:05:59.307 ************************************ 00:05:59.307 11:42:48 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:59.307 11:42:48 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:59.307 11:42:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.307 11:42:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.307 ************************************ 00:05:59.307 START TEST accel_dif_generate_copy 00:05:59.307 ************************************ 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:59.307 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:59.307 [2024-07-12 11:42:48.648140] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
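Every workload in this stretch, dif_generate_copy included, runs against the same baseline the traces keep re-reading: the software module, a queue depth of 32, 4096-byte buffers, and a 1-second duration, with the completed runs above each landing around 1.5 s of wall time. A hedged loop sketch covering the DIF workloads in one pass; the workload names are taken from the log, but looping them is an editorial convenience, not how autotest drives them:
# Sketch: each workload below is what run_test launches one at a time above.
for w in dif_verify dif_generate dif_generate_copy; do
  ./build/examples/accel_perf -t 1 -w "$w"
done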
00:05:59.307 [2024-07-12 11:42:48.648211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815491 ] 00:05:59.307 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.307 [2024-07-12 11:42:48.712201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.566 [2024-07-12 11:42:48.832292] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.566 11:42:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:00.940 11:42:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.940 00:06:00.940 real 0m1.479s 00:06:00.941 user 0m1.326s 00:06:00.941 sys 0m0.155s 00:06:00.941 11:42:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.941 11:42:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:00.941 ************************************ 00:06:00.941 END TEST accel_dif_generate_copy 00:06:00.941 ************************************ 00:06:00.941 11:42:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:00.941 11:42:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.941 11:42:50 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:00.941 11:42:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.941 11:42:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.941 ************************************ 00:06:00.941 START TEST accel_comp 00:06:00.941 ************************************ 00:06:00.941 11:42:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:00.941 [2024-07-12 11:42:50.173720] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:00.941 [2024-07-12 11:42:50.173783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815651 ] 00:06:00.941 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.941 [2024-07-12 11:42:50.237941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.941 [2024-07-12 11:42:50.357245] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 
11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.941 11:42:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:02.314 11:42:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.314 00:06:02.314 real 0m1.482s 00:06:02.314 user 0m1.334s 00:06:02.314 sys 0m0.151s 00:06:02.314 11:42:51 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:02.314 11:42:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:02.314 ************************************ 00:06:02.314 END TEST accel_comp 00:06:02.314 ************************************ 00:06:02.314 11:42:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.314 11:42:51 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:02.314 11:42:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.314 11:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.314 ************************************ 00:06:02.314 START TEST accel_decomp 00:06:02.314 ************************************ 00:06:02.314 11:42:51 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:02.314 11:42:51 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:02.314 [2024-07-12 11:42:51.705193] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:02.314 [2024-07-12 11:42:51.705263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815808 ] 00:06:02.314 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.314 [2024-07-12 11:42:51.767580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.573 [2024-07-12 11:42:51.888389] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.573 11:42:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.574 11:42:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.947 11:42:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.947 00:06:03.947 real 0m1.475s 00:06:03.947 user 0m1.335s 00:06:03.947 sys 0m0.143s 00:06:03.947 11:42:53 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.947 11:42:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:03.947 ************************************ 00:06:03.947 END TEST accel_decomp 00:06:03.947 ************************************ 00:06:03.947 
11:42:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.947 11:42:53 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:03.947 11:42:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.947 11:42:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.947 ************************************ 00:06:03.947 START TEST accel_decomp_full 00:06:03.947 ************************************ 00:06:03.947 11:42:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:03.947 11:42:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:03.947 [2024-07-12 11:42:53.225848] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:06:03.947 [2024-07-12 11:42:53.225923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816085 ] 00:06:03.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.947 [2024-07-12 11:42:53.295931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.947 [2024-07-12 11:42:53.428610] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.205 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.206 11:42:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.580 11:42:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.580 00:06:05.580 real 0m1.483s 00:06:05.580 user 0m1.331s 00:06:05.580 sys 0m0.153s 00:06:05.580 11:42:54 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.580 11:42:54 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:05.580 ************************************ 00:06:05.580 END TEST accel_decomp_full 00:06:05.580 ************************************ 00:06:05.580 11:42:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.580 11:42:54 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:05.580 11:42:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.580 11:42:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.580 ************************************ 00:06:05.580 START TEST accel_decomp_mcore 00:06:05.580 ************************************ 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:05.580 11:42:54 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:05.580 [2024-07-12 11:42:54.756751] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:05.580 [2024-07-12 11:42:54.756816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816243 ] 00:06:05.580 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.580 [2024-07-12 11:42:54.823778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.580 [2024-07-12 11:42:54.945802] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.580 [2024-07-12 11:42:54.945855] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.580 [2024-07-12 11:42:54.945975] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.580 [2024-07-12 11:42:54.945971] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:55 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.580 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.581 11:42:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.954 00:06:06.954 real 0m1.491s 00:06:06.954 user 0m4.803s 00:06:06.954 sys 0m0.151s 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.954 11:42:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 END TEST accel_decomp_mcore 00:06:06.954 ************************************ 00:06:06.954 11:42:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.954 11:42:56 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:06.954 11:42:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.954 11:42:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 START TEST accel_decomp_full_mcore 00:06:06.954 ************************************ 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:06.954 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:06.954 [2024-07-12 11:42:56.294302] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:06.954 [2024-07-12 11:42:56.294366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816399 ] 00:06:06.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.954 [2024-07-12 11:42:56.356211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.213 [2024-07-12 11:42:56.483589] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.213 [2024-07-12 11:42:56.483665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.213 [2024-07-12 11:42:56.483759] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.213 [2024-07-12 11:42:56.483762] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:07.214 11:42:56 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.214 11:42:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.587 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.587 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.587 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.587 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.588 00:06:08.588 real 0m1.496s 00:06:08.588 user 0m4.812s 00:06:08.588 sys 0m0.161s 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.588 11:42:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:08.588 ************************************ 00:06:08.588 END TEST accel_decomp_full_mcore 00:06:08.588 ************************************ 00:06:08.588 11:42:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.588 11:42:57 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:08.588 11:42:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.588 11:42:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.588 ************************************ 00:06:08.588 START TEST accel_decomp_mthread 00:06:08.588 ************************************ 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:08.588 11:42:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:06:08.588 [2024-07-12 11:42:57.838238] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:08.588 [2024-07-12 11:42:57.838302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816679 ] 00:06:08.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.588 [2024-07-12 11:42:57.901270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.588 [2024-07-12 11:42:58.018588] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.588 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.846 11:42:58 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.846 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 11:42:58 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.220 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.221 11:42:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.221 00:06:10.221 real 0m1.495s 00:06:10.221 user 0m1.351s 00:06:10.221 sys 0m0.148s 00:06:10.221 11:42:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:10.221 11:42:59 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 ************************************ 00:06:10.221 END TEST accel_decomp_mthread 00:06:10.221 ************************************ 00:06:10.221 11:42:59 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.221 11:42:59 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:10.221 11:42:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:10.221 11:42:59 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 ************************************ 00:06:10.221 START TEST accel_decomp_full_mthread 00:06:10.221 ************************************ 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:10.221 [2024-07-12 11:42:59.382086] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:06:10.221 [2024-07-12 11:42:59.382150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816836 ] 00:06:10.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.221 [2024-07-12 11:42:59.447079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.221 [2024-07-12 11:42:59.565344] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.221 11:42:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.594 00:06:11.594 real 0m1.494s 00:06:11.594 user 0m1.355s 00:06:11.594 sys 0m0.142s 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.594 11:43:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:11.594 ************************************ 00:06:11.594 END TEST accel_decomp_full_mthread 00:06:11.594 
************************************ 00:06:11.594 11:43:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:11.594 11:43:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.594 11:43:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:11.594 11:43:00 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:11.594 11:43:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.594 11:43:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.594 11:43:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.594 11:43:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.595 11:43:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.595 11:43:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.595 11:43:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.595 11:43:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:11.595 11:43:00 accel -- accel/accel.sh@41 -- # jq -r . 00:06:11.595 ************************************ 00:06:11.595 START TEST accel_dif_functional_tests 00:06:11.595 ************************************ 00:06:11.595 11:43:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.595 [2024-07-12 11:43:00.944355] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:11.595 [2024-07-12 11:43:00.944427] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817015 ] 00:06:11.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.595 [2024-07-12 11:43:01.006212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.853 [2024-07-12 11:43:01.123060] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.853 [2024-07-12 11:43:01.123115] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.853 [2024-07-12 11:43:01.123118] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.853 00:06:11.853 00:06:11.853 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.853 http://cunit.sourceforge.net/ 00:06:11.853 00:06:11.853 00:06:11.853 Suite: accel_dif 00:06:11.853 Test: verify: DIF generated, GUARD check ...passed 00:06:11.853 Test: verify: DIF generated, APPTAG check ...passed 00:06:11.853 Test: verify: DIF generated, REFTAG check ...passed 00:06:11.853 Test: verify: DIF not generated, GUARD check ...[2024-07-12 11:43:01.220098] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.853 passed 00:06:11.853 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 11:43:01.220182] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.853 passed 00:06:11.853 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 11:43:01.220214] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.853 passed 00:06:11.853 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:11.853 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 11:43:01.220274] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:11.853 passed 00:06:11.853 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:11.853 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:11.853 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:11.853 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 11:43:01.220409] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:11.853 passed 00:06:11.853 Test: verify copy: DIF generated, GUARD check ...passed 00:06:11.853 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:11.853 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:11.853 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 11:43:01.220569] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.853 passed 00:06:11.853 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 11:43:01.220604] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.853 passed 00:06:11.853 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 11:43:01.220637] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.853 passed 00:06:11.853 Test: generate copy: DIF generated, GUARD check ...passed 00:06:11.853 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:11.853 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:11.853 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:11.853 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:11.853 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:11.853 Test: generate copy: iovecs-len validate ...[2024-07-12 11:43:01.220882] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:11.853 passed 00:06:11.853 Test: generate copy: buffer alignment validate ...passed 00:06:11.853 00:06:11.853 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.853 suites 1 1 n/a 0 0 00:06:11.853 tests 26 26 26 0 0 00:06:11.853 asserts 115 115 115 0 n/a 00:06:11.853 00:06:11.853 Elapsed time = 0.003 seconds 00:06:12.111 00:06:12.111 real 0m0.576s 00:06:12.111 user 0m0.859s 00:06:12.111 sys 0m0.190s 00:06:12.111 11:43:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.111 11:43:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:12.111 ************************************ 00:06:12.111 END TEST accel_dif_functional_tests 00:06:12.111 ************************************ 00:06:12.111 00:06:12.111 real 0m33.405s 00:06:12.111 user 0m36.777s 00:06:12.111 sys 0m4.628s 00:06:12.111 11:43:01 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.111 11:43:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.111 ************************************ 00:06:12.111 END TEST accel 00:06:12.111 ************************************ 00:06:12.111 11:43:01 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.111 11:43:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.111 11:43:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.111 11:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.111 ************************************ 00:06:12.111 START TEST accel_rpc 00:06:12.111 ************************************ 00:06:12.111 11:43:01 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.111 * Looking for test storage... 00:06:12.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:12.370 11:43:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.370 11:43:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=817182 00:06:12.370 11:43:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:12.370 11:43:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 817182 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 817182 ']' 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.370 11:43:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.370 [2024-07-12 11:43:01.657440] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:06:12.370 [2024-07-12 11:43:01.657524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817182 ] 00:06:12.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.370 [2024-07-12 11:43:01.713026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.370 [2024-07-12 11:43:01.829059] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.303 11:43:02 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.303 11:43:02 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:13.303 11:43:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:13.303 11:43:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:13.303 11:43:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:13.303 11:43:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:13.303 11:43:02 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:13.303 11:43:02 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.303 11:43:02 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.303 11:43:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 ************************************ 00:06:13.303 START TEST accel_assign_opcode 00:06:13.303 ************************************ 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 [2024-07-12 11:43:02.611545] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 [2024-07-12 11:43:02.619550] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.303 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.560 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.560 11:43:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:13.560 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.560 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.560 11:43:02 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:13.561 11:43:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:13.561 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.561 software 00:06:13.561 00:06:13.561 real 0m0.296s 00:06:13.561 user 0m0.038s 00:06:13.561 sys 0m0.005s 00:06:13.561 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.561 11:43:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.561 ************************************ 00:06:13.561 END TEST accel_assign_opcode 00:06:13.561 ************************************ 00:06:13.561 11:43:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 817182 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 817182 ']' 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 817182 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 817182 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 817182' 00:06:13.561 killing process with pid 817182 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@968 -- # kill 817182 00:06:13.561 11:43:02 accel_rpc -- common/autotest_common.sh@973 -- # wait 817182 00:06:14.123 00:06:14.123 real 0m1.849s 00:06:14.123 user 0m1.972s 00:06:14.123 sys 0m0.444s 00:06:14.123 11:43:03 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.123 11:43:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.123 ************************************ 00:06:14.123 END TEST accel_rpc 00:06:14.123 ************************************ 00:06:14.123 11:43:03 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.123 11:43:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.123 11:43:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.123 11:43:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.123 ************************************ 00:06:14.123 START TEST app_cmdline 00:06:14.123 ************************************ 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.123 * Looking for test storage... 
00:06:14.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.123 11:43:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.123 11:43:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=817515 00:06:14.123 11:43:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.123 11:43:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 817515 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 817515 ']' 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.123 11:43:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.123 [2024-07-12 11:43:03.552392] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:14.123 [2024-07-12 11:43:03.552488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817515 ] 00:06:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.123 [2024-07-12 11:43:03.611653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.380 [2024-07-12 11:43:03.718287] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.638 11:43:03 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.638 11:43:03 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:14.638 11:43:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.895 { 00:06:14.895 "version": "SPDK v24.09-pre git sha1 5e8e6dfc2", 00:06:14.895 "fields": { 00:06:14.895 "major": 24, 00:06:14.895 "minor": 9, 00:06:14.895 "patch": 0, 00:06:14.895 "suffix": "-pre", 00:06:14.895 "commit": "5e8e6dfc2" 00:06:14.895 } 00:06:14.895 } 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.895 11:43:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.895 11:43:04 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.896 11:43:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.896 11:43:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.896 11:43:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.896 11:43:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.896 11:43:04 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.154 request: 00:06:15.154 { 00:06:15.154 "method": "env_dpdk_get_mem_stats", 00:06:15.154 "req_id": 1 00:06:15.154 } 00:06:15.154 Got JSON-RPC error response 00:06:15.154 response: 00:06:15.154 { 00:06:15.154 "code": -32601, 00:06:15.154 "message": "Method not found" 00:06:15.154 } 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.154 11:43:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 817515 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 817515 ']' 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 817515 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 817515 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 817515' 00:06:15.154 killing process with pid 817515 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@968 -- # kill 817515 00:06:15.154 11:43:04 app_cmdline -- common/autotest_common.sh@973 -- # wait 817515 00:06:15.720 00:06:15.720 real 0m1.573s 00:06:15.720 user 0m1.883s 00:06:15.720 sys 0m0.466s 00:06:15.720 11:43:05 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.720 11:43:05 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.720 ************************************ 00:06:15.720 END TEST app_cmdline 00:06:15.720 ************************************ 00:06:15.720 11:43:05 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.720 11:43:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.720 11:43:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.720 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 ************************************ 00:06:15.720 START TEST version 00:06:15.720 ************************************ 00:06:15.720 11:43:05 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.720 * Looking for test storage... 00:06:15.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.720 11:43:05 version -- app/version.sh@17 -- # get_header_version major 00:06:15.720 11:43:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # cut -f2 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.720 11:43:05 version -- app/version.sh@17 -- # major=24 00:06:15.720 11:43:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:15.720 11:43:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # cut -f2 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.720 11:43:05 version -- app/version.sh@18 -- # minor=9 00:06:15.720 11:43:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.720 11:43:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # cut -f2 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.720 11:43:05 version -- app/version.sh@19 -- # patch=0 00:06:15.720 11:43:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.720 11:43:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # cut -f2 00:06:15.720 11:43:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.720 11:43:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.720 11:43:05 version -- app/version.sh@22 -- # version=24.9 00:06:15.720 11:43:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.720 11:43:05 version -- app/version.sh@28 -- # version=24.9rc0 00:06:15.720 11:43:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.720 11:43:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.720 11:43:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:15.720 11:43:05 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:15.720 00:06:15.720 real 0m0.102s 00:06:15.720 user 0m0.058s 00:06:15.720 sys 0m0.066s 00:06:15.720 11:43:05 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.720 11:43:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 ************************************ 00:06:15.720 END TEST version 00:06:15.720 ************************************ 00:06:15.720 11:43:05 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:15.720 11:43:05 -- spdk/autotest.sh@198 -- # uname -s 00:06:15.720 11:43:05 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:15.720 11:43:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.720 11:43:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.720 11:43:05 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:15.720 11:43:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:15.720 11:43:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:15.720 11:43:05 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:15.720 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:15.980 11:43:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:15.980 11:43:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:15.980 11:43:05 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:15.980 11:43:05 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:15.980 11:43:05 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:15.980 11:43:05 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:15.980 11:43:05 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.980 11:43:05 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:15.980 11:43:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.980 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:15.980 ************************************ 00:06:15.980 START TEST nvmf_tcp 00:06:15.980 ************************************ 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.980 * Looking for test storage... 00:06:15.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.980 11:43:05 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.980 11:43:05 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.980 11:43:05 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.980 11:43:05 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:15.980 11:43:05 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:15.980 11:43:05 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.980 11:43:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.980 ************************************ 00:06:15.980 START TEST nvmf_example 00:06:15.980 ************************************ 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.980 * Looking for test storage... 
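[editor's note] The version check exercised a few entries above (app/version.sh) derives the version string directly from include/spdk/version.h and compares it against the bundled Python package. A minimal sketch of that extraction, reusing the grep/cut/tr pipeline visible in the trace; the "-pre" suffix being rendered as "rc0" is inferred from the traced values (24.9 + -pre -> 24.9rc0), not from the script source:

  get_header_version() {
    # Keep only the value of e.g. '#define SPDK_VERSION_MAJOR 24' (tab-separated field 2).
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 9
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"
  # Assumed mapping: a -pre suffix shows up as rc0 in the traced comparison.
  [[ $suffix == -pre ]] && version="${version}rc0"
  # Must match what the in-tree Python package reports:
  [[ $version == "$(python3 -c 'import spdk; print(spdk.__version__)')" ]]
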
00:06:15.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:15.980 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.981 11:43:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.881 Found net devices under 
0000:0a:00.0: cvl_0_0 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:17.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:17.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:06:17.881 00:06:17.881 --- 10.0.0.2 ping statistics --- 00:06:17.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.881 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:06:17.881 00:06:17.881 --- 10.0.0.1 ping statistics --- 00:06:17.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.881 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=819415 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 819415 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 819415 ']' 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.881 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
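[editor's note] Before the example target is launched, nvmf_tcp_init wires the two cvl_0_* ports into a back-to-back topology: one port stays in the root namespace as the initiator interface, the other is moved into a private network namespace for the target. A condensed sketch of the commands the trace shows, with the interface names and addresses used in this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

Both pings succeeding (as in the statistics above) is what lets the test proceed to starting the nvmf example application inside cvl_0_0_ns_spdk.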
00:06:17.882 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.882 11:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:18.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:19.072 11:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:19.072 EAL: No free 2048 kB hugepages reported on node 1 
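[editor's note] The subsystem configuration driven through rpc_cmd above can be reproduced against a running target with scripts/rpc.py; the subcommands and arguments below are exactly the ones the trace shows, and the /var/tmp/spdk.sock socket path is the default the test waited on (waitforlisten). This is a sketch of the equivalent standalone invocation, not the test's own helper:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512                                      # 64 MB malloc bdev, 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener on 10.0.0.2:4420 in place, spdk_nvme_perf can then connect from the root namespace using the trtype:tcp/traddr/trsvcid/subnqn string seen in the trace.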
00:06:31.338 Initializing NVMe Controllers 00:06:31.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:31.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:31.338 Initialization complete. Launching workers. 00:06:31.338 ======================================================== 00:06:31.338 Latency(us) 00:06:31.338 Device Information : IOPS MiB/s Average min max 00:06:31.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14890.85 58.17 4299.36 894.25 18594.58 00:06:31.338 ======================================================== 00:06:31.338 Total : 14890.85 58.17 4299.36 894.25 18594.58 00:06:31.338 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:31.338 rmmod nvme_tcp 00:06:31.338 rmmod nvme_fabrics 00:06:31.338 rmmod nvme_keyring 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 819415 ']' 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 819415 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 819415 ']' 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 819415 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 819415 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 819415' 00:06:31.338 killing process with pid 819415 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 819415 00:06:31.338 11:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 819415 00:06:31.338 nvmf threads initialize successfully 00:06:31.338 bdev subsystem init successfully 00:06:31.338 created a nvmf target service 00:06:31.338 create targets's poll groups done 00:06:31.338 all subsystems of target started 00:06:31.338 nvmf target is running 00:06:31.338 all subsystems of target stopped 00:06:31.338 destroy targets's poll groups done 00:06:31.338 destroyed the nvmf target service 00:06:31.338 bdev subsystem finish successfully 00:06:31.338 nvmf threads destroy successfully 00:06:31.338 11:43:19 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.338 11:43:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:31.597 00:06:31.597 real 0m15.753s 00:06:31.597 user 0m45.090s 00:06:31.597 sys 0m3.214s 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.597 11:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:31.597 ************************************ 00:06:31.597 END TEST nvmf_example 00:06:31.597 ************************************ 00:06:31.859 11:43:21 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.859 11:43:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:31.859 11:43:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.859 11:43:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.859 ************************************ 00:06:31.859 START TEST nvmf_filesystem 00:06:31.859 ************************************ 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.859 * Looking for test storage... 
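[editor's note] As a quick sanity check on the perf summary printed above, the reported bandwidth follows directly from the reported IOPS and the 4096-byte I/O size passed to spdk_nvme_perf (-o 4096):

  # 14890.85 IOPS * 4096 B per I/O / 1048576 B per MiB ~= 58.17 MiB/s, matching the table.
  awk 'BEGIN { printf "%.2f MiB/s\n", 14890.85 * 4096 / 1048576 }'
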
00:06:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:31.859 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:31.860 11:43:21 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:31.860 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:31.860 #define SPDK_CONFIG_H 00:06:31.860 #define SPDK_CONFIG_APPS 1 00:06:31.860 #define SPDK_CONFIG_ARCH native 00:06:31.860 #undef SPDK_CONFIG_ASAN 00:06:31.860 #undef SPDK_CONFIG_AVAHI 00:06:31.860 #undef SPDK_CONFIG_CET 00:06:31.860 #define SPDK_CONFIG_COVERAGE 1 00:06:31.860 #define SPDK_CONFIG_CROSS_PREFIX 00:06:31.860 #undef SPDK_CONFIG_CRYPTO 00:06:31.860 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:31.860 #undef SPDK_CONFIG_CUSTOMOCF 00:06:31.860 #undef SPDK_CONFIG_DAOS 00:06:31.860 #define SPDK_CONFIG_DAOS_DIR 00:06:31.860 #define SPDK_CONFIG_DEBUG 1 00:06:31.860 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:31.860 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:31.860 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:31.860 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:31.860 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:31.860 #undef SPDK_CONFIG_DPDK_UADK 00:06:31.860 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:31.860 #define SPDK_CONFIG_EXAMPLES 1 00:06:31.860 #undef SPDK_CONFIG_FC 00:06:31.860 #define SPDK_CONFIG_FC_PATH 00:06:31.860 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:31.860 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:31.860 #undef SPDK_CONFIG_FUSE 00:06:31.860 #undef SPDK_CONFIG_FUZZER 00:06:31.860 #define SPDK_CONFIG_FUZZER_LIB 00:06:31.860 #undef SPDK_CONFIG_GOLANG 00:06:31.860 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:31.860 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:31.860 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:31.860 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:31.860 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:31.860 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:31.860 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:31.860 #define SPDK_CONFIG_IDXD 1 00:06:31.860 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:31.860 #undef SPDK_CONFIG_IPSEC_MB 00:06:31.860 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:31.860 #define SPDK_CONFIG_ISAL 1 00:06:31.861 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:31.861 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:31.861 #define SPDK_CONFIG_LIBDIR 00:06:31.861 #undef SPDK_CONFIG_LTO 00:06:31.861 #define SPDK_CONFIG_MAX_LCORES 00:06:31.861 #define SPDK_CONFIG_NVME_CUSE 1 00:06:31.861 #undef SPDK_CONFIG_OCF 00:06:31.861 #define SPDK_CONFIG_OCF_PATH 00:06:31.861 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:31.861 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:31.861 #define SPDK_CONFIG_PGO_DIR 00:06:31.861 #undef SPDK_CONFIG_PGO_USE 00:06:31.861 #define SPDK_CONFIG_PREFIX /usr/local 00:06:31.861 #undef SPDK_CONFIG_RAID5F 00:06:31.861 #undef SPDK_CONFIG_RBD 00:06:31.861 #define SPDK_CONFIG_RDMA 1 00:06:31.861 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:31.861 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:31.861 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:31.861 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:31.861 #define SPDK_CONFIG_SHARED 1 00:06:31.861 #undef SPDK_CONFIG_SMA 00:06:31.861 #define SPDK_CONFIG_TESTS 1 00:06:31.861 #undef SPDK_CONFIG_TSAN 00:06:31.861 #define SPDK_CONFIG_UBLK 1 00:06:31.861 #define SPDK_CONFIG_UBSAN 1 00:06:31.861 #undef SPDK_CONFIG_UNIT_TESTS 00:06:31.861 #undef SPDK_CONFIG_URING 00:06:31.861 #define SPDK_CONFIG_URING_PATH 00:06:31.861 #undef SPDK_CONFIG_URING_ZNS 00:06:31.861 #undef SPDK_CONFIG_USDT 00:06:31.861 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:31.861 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:31.861 #define SPDK_CONFIG_VFIO_USER 1 00:06:31.861 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:31.861 #define SPDK_CONFIG_VHOST 1 00:06:31.861 #define SPDK_CONFIG_VIRTIO 1 00:06:31.861 #undef SPDK_CONFIG_VTUNE 00:06:31.861 #define SPDK_CONFIG_VTUNE_DIR 00:06:31.861 #define SPDK_CONFIG_WERROR 1 00:06:31.861 #define SPDK_CONFIG_WPDK_DIR 00:06:31.861 #undef SPDK_CONFIG_XNVME 00:06:31.861 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:31.861 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:31.862 11:43:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.862 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
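Note on the paired ": <value>" / "export SPDK_TEST_*" entries traced above from common/autotest_common.sh: they are consistent with the usual shell default-assignment idiom, i.e. keep any value the CI job injected, otherwise fall back to a default (0/1, "tcp", "e810", ...). A minimal sketch of that idiom, assuming the variable names visible in the trace and not quoting the actual script text:

  # keep the caller's value if already set, otherwise apply the default; then export for child scripts
  : "${SPDK_TEST_NVMF:=1}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
  : "${SPDK_TEST_NVMF_NICS:=e810}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT SPDK_TEST_NVMF_NICS

That is how this run ends up with SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp and SPDK_TEST_NVMF_NICS=e810 while most other SPDK_TEST_* switches stay at 0.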
00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 821129 ]] 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 821129 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.f2CFeA 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.f2CFeA/tests/target /tmp/spdk.f2CFeA 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56478404608 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5516304384 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.863 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993977344 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30997000192 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:06:31.864 11:43:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=356352 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:31.864 * Looking for test storage... 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56478404608 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7730896896 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:06:31.864 11:43:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.864 11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.865 
11:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.865 11:43:21 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:31.865 11:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:33.763 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:33.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:33.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.764 11:43:23 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:33.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:33.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.764 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:06:34.023 00:06:34.023 --- 10.0.0.2 ping statistics --- 00:06:34.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.023 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:06:34.023 00:06:34.023 --- 10.0.0.1 ping statistics --- 00:06:34.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.023 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.023 ************************************ 00:06:34.023 START TEST nvmf_filesystem_no_in_capsule 00:06:34.023 ************************************ 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=822754 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 822754 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 822754 ']' 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:34.023 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.023 [2024-07-12 11:43:23.474026] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:34.023 [2024-07-12 11:43:23.474123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.281 [2024-07-12 11:43:23.544511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.281 [2024-07-12 11:43:23.657443] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.282 [2024-07-12 11:43:23.657516] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.282 [2024-07-12 11:43:23.657530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.282 [2024-07-12 11:43:23.657555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.282 [2024-07-12 11:43:23.657569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:34.282 [2024-07-12 11:43:23.657650] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.282 [2024-07-12 11:43:23.657674] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.282 [2024-07-12 11:43:23.658028] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.282 [2024-07-12 11:43:23.658033] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 [2024-07-12 11:43:23.815769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 Malloc1 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 [2024-07-12 11:43:24.004332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.540 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:06:34.540 { 00:06:34.540 "name": "Malloc1", 00:06:34.540 "aliases": [ 00:06:34.540 "b411d58e-c7da-4ff9-a781-8d116cb4261b" 00:06:34.540 ], 00:06:34.540 "product_name": "Malloc disk", 00:06:34.540 "block_size": 512, 00:06:34.540 "num_blocks": 1048576, 00:06:34.540 "uuid": "b411d58e-c7da-4ff9-a781-8d116cb4261b", 00:06:34.540 "assigned_rate_limits": { 00:06:34.540 "rw_ios_per_sec": 0, 00:06:34.540 "rw_mbytes_per_sec": 0, 00:06:34.540 "r_mbytes_per_sec": 0, 00:06:34.540 "w_mbytes_per_sec": 0 00:06:34.540 }, 00:06:34.540 "claimed": true, 00:06:34.540 "claim_type": "exclusive_write", 00:06:34.540 "zoned": false, 00:06:34.540 "supported_io_types": { 00:06:34.540 "read": true, 00:06:34.540 "write": true, 00:06:34.540 "unmap": true, 00:06:34.540 "write_zeroes": true, 00:06:34.540 "flush": true, 00:06:34.540 "reset": true, 00:06:34.540 "compare": false, 00:06:34.540 "compare_and_write": false, 00:06:34.540 "abort": true, 00:06:34.540 "nvme_admin": false, 00:06:34.540 "nvme_io": false 00:06:34.540 }, 00:06:34.540 "memory_domains": [ 00:06:34.540 { 00:06:34.540 "dma_device_id": "system", 00:06:34.540 "dma_device_type": 1 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.540 "dma_device_type": 2 00:06:34.540 } 00:06:34.540 ], 00:06:34.540 "driver_specific": {} 00:06:34.540 } 00:06:34.540 ]' 00:06:34.540 
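Note: with the target running, the test provisions it entirely over the RPC socket. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py; issued by hand against the same target, the provisioning shown above is roughly:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                    # TCP transport, no in-capsule data in this pass
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                           # 512 MiB RAM-backed bdev with 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1    # source of the JSON above: block_size 512 * num_blocks 1048576 = 536870912 bytes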
11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:34.798 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:35.362 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:35.362 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:06:35.362 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:06:35.362 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:06:35.362 11:43:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:37.258 11:43:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:37.258 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:37.516 11:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:38.449 11:43:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.381 ************************************ 00:06:39.381 START TEST filesystem_ext4 00:06:39.381 ************************************ 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:06:39.381 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:39.381 mke2fs 1.46.5 (30-Dec-2021) 00:06:39.381 Discarding device blocks: 0/522240 done 00:06:39.381 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:39.381 
Filesystem UUID: 9557d1fa-9834-4476-b919-af2d66b7f97e 00:06:39.381 Superblock backups stored on blocks: 00:06:39.381 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:39.381 00:06:39.381 Allocating group tables: 0/64 done 00:06:39.381 Writing inode tables: 0/64 done 00:06:39.639 Creating journal (8192 blocks): done 00:06:39.639 Writing superblocks and filesystem accounting information: 0/64 done 00:06:39.639 00:06:39.639 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:06:39.639 11:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 822754 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:40.204 00:06:40.204 real 0m0.879s 00:06:40.204 user 0m0.019s 00:06:40.204 sys 0m0.053s 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:40.204 ************************************ 00:06:40.204 END TEST filesystem_ext4 00:06:40.204 ************************************ 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.204 ************************************ 00:06:40.204 START TEST filesystem_btrfs 00:06:40.204 ************************************ 00:06:40.204 11:43:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:40.204 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:06:40.205 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:40.462 btrfs-progs v6.6.2 00:06:40.462 See https://btrfs.readthedocs.io for more information. 00:06:40.462 00:06:40.462 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:40.462 NOTE: several default settings have changed in version 5.15, please make sure 00:06:40.462 this does not affect your deployments: 00:06:40.462 - DUP for metadata (-m dup) 00:06:40.462 - enabled no-holes (-O no-holes) 00:06:40.462 - enabled free-space-tree (-R free-space-tree) 00:06:40.462 00:06:40.462 Label: (null) 00:06:40.462 UUID: d44fa78a-20b1-479f-8532-5cf7446f720b 00:06:40.462 Node size: 16384 00:06:40.462 Sector size: 4096 00:06:40.462 Filesystem size: 510.00MiB 00:06:40.462 Block group profiles: 00:06:40.462 Data: single 8.00MiB 00:06:40.462 Metadata: DUP 32.00MiB 00:06:40.462 System: DUP 8.00MiB 00:06:40.462 SSD detected: yes 00:06:40.462 Zoned device: no 00:06:40.462 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:40.462 Runtime features: free-space-tree 00:06:40.462 Checksum: crc32c 00:06:40.462 Number of devices: 1 00:06:40.462 Devices: 00:06:40.462 ID SIZE PATH 00:06:40.462 1 510.00MiB /dev/nvme0n1p1 00:06:40.462 00:06:40.462 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:06:40.462 11:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 822754 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.026 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.284 00:06:41.284 real 0m0.877s 00:06:41.284 user 0m0.019s 00:06:41.284 sys 0m0.111s 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:41.284 ************************************ 00:06:41.284 END TEST filesystem_btrfs 00:06:41.284 ************************************ 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:41.284 11:43:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.284 ************************************ 00:06:41.284 START TEST filesystem_xfs 00:06:41.284 ************************************ 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:06:41.284 11:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:41.284 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:41.284 = sectsz=512 attr=2, projid32bit=1 00:06:41.284 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:41.284 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:41.284 data = bsize=4096 blocks=130560, imaxpct=25 00:06:41.284 = sunit=0 swidth=0 blks 00:06:41.284 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:41.284 log =internal log bsize=4096 blocks=16384, version=2 00:06:41.284 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:41.284 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:42.216 Discarding blocks...Done. 
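Note: the ext4, btrfs, and xfs sub-tests exercise the exported namespace the same way from the host side; only the mkfs step differs. Keeping the identifiers seen in this run (serial SPDKISFASTANDAWESOME, device nvme0n1), the host-side flow is approximately as follows — the connect and partitioning happen once, the mkfs/mount/check cycle repeats per filesystem:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid=<host-uuid>   # this run passes the machine UUID visible in the trace
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME          # resolves to nvme0n1 here
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1          # the btrfs and xfs passes use mkfs.btrfs -f / mkfs.xfs -f instead
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device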
00:06:42.216 11:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:06:42.216 11:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 822754 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.741 00:06:44.741 real 0m3.417s 00:06:44.741 user 0m0.022s 00:06:44.741 sys 0m0.055s 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.741 11:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:44.741 ************************************ 00:06:44.741 END TEST filesystem_xfs 00:06:44.741 ************************************ 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:44.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:06:44.741 
11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 822754 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 822754 ']' 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 822754 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:44.741 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 822754 00:06:44.999 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:44.999 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:44.999 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 822754' 00:06:44.999 killing process with pid 822754 00:06:44.999 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 822754 00:06:44.999 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 822754 00:06:45.258 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:45.258 00:06:45.258 real 0m11.306s 00:06:45.258 user 0m43.237s 00:06:45.258 sys 0m1.682s 00:06:45.258 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.258 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.258 ************************************ 00:06:45.258 END TEST nvmf_filesystem_no_in_capsule 00:06:45.258 ************************************ 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.518 
************************************ 00:06:45.518 START TEST nvmf_filesystem_in_capsule 00:06:45.518 ************************************ 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=824311 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 824311 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 824311 ']' 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:45.518 11:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.518 [2024-07-12 11:43:34.840394] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:06:45.518 [2024-07-12 11:43:34.840485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.518 [2024-07-12 11:43:34.918728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.776 [2024-07-12 11:43:35.041637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.776 [2024-07-12 11:43:35.041690] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.777 [2024-07-12 11:43:35.041707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.777 [2024-07-12 11:43:35.041720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.777 [2024-07-12 11:43:35.041732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
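Note: this second pass (nvmf_filesystem_in_capsule) repeats the same provisioning and filesystem cycle; the functional difference is that the TCP transport is created with a 4096-byte in-capsule data limit, so small host writes can be carried inside the command capsule instead of needing a separate data transfer. In terms of the RPC call, only the -c argument changes (sketch mirroring the trace):

  # first pass: no in-capsule data
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this pass: permit up to 4096 bytes of in-capsule data
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096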
00:06:45.777 [2024-07-12 11:43:35.041795] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.777 [2024-07-12 11:43:35.041826] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.777 [2024-07-12 11:43:35.041955] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.777 [2024-07-12 11:43:35.041961] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.777 [2024-07-12 11:43:35.195561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.777 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.035 Malloc1 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.035 11:43:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.035 [2024-07-12 11:43:35.380200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:06:46.035 { 00:06:46.035 "name": "Malloc1", 00:06:46.035 "aliases": [ 00:06:46.035 "9b9194eb-0936-49e3-b43d-81e2e6eb82d6" 00:06:46.035 ], 00:06:46.035 "product_name": "Malloc disk", 00:06:46.035 "block_size": 512, 00:06:46.035 "num_blocks": 1048576, 00:06:46.035 "uuid": "9b9194eb-0936-49e3-b43d-81e2e6eb82d6", 00:06:46.035 "assigned_rate_limits": { 00:06:46.035 "rw_ios_per_sec": 0, 00:06:46.035 "rw_mbytes_per_sec": 0, 00:06:46.035 "r_mbytes_per_sec": 0, 00:06:46.035 "w_mbytes_per_sec": 0 00:06:46.035 }, 00:06:46.035 "claimed": true, 00:06:46.035 "claim_type": "exclusive_write", 00:06:46.035 "zoned": false, 00:06:46.035 "supported_io_types": { 00:06:46.035 "read": true, 00:06:46.035 "write": true, 00:06:46.035 "unmap": true, 00:06:46.035 "write_zeroes": true, 00:06:46.035 "flush": true, 00:06:46.035 "reset": true, 00:06:46.035 "compare": false, 00:06:46.035 "compare_and_write": false, 00:06:46.035 "abort": true, 00:06:46.035 "nvme_admin": false, 00:06:46.035 "nvme_io": false 00:06:46.035 }, 00:06:46.035 "memory_domains": [ 00:06:46.035 { 00:06:46.035 "dma_device_id": "system", 00:06:46.035 "dma_device_type": 1 00:06:46.035 }, 00:06:46.035 { 00:06:46.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.035 "dma_device_type": 2 00:06:46.035 } 00:06:46.035 ], 00:06:46.035 "driver_specific": {} 00:06:46.035 } 00:06:46.035 ]' 00:06:46.035 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:46.036 11:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:46.602 11:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:46.602 11:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:06:46.602 11:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:06:46.602 11:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:06:46.602 11:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:49.163 11:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:49.729 11:43:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:50.661 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:50.661 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:50.661 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:50.661 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.661 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.919 ************************************ 00:06:50.919 START TEST filesystem_in_capsule_ext4 00:06:50.919 ************************************ 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:06:50.919 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:50.919 mke2fs 1.46.5 (30-Dec-2021) 00:06:50.919 Discarding device blocks: 0/522240 done 00:06:50.919 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:50.919 Filesystem UUID: e5505341-aac4-4d9b-a759-c5f815bf09e5 00:06:50.919 Superblock backups stored on blocks: 00:06:50.919 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:50.919 00:06:50.919 Allocating group tables: 0/64 done 00:06:50.919 Writing inode tables: 0/64 done 00:06:51.176 Creating journal (8192 blocks): done 00:06:51.455 Writing superblocks and filesystem accounting information: 0/64 done 00:06:51.455 00:06:51.455 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:06:51.455 11:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 824311 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.713 00:06:51.713 real 0m1.000s 00:06:51.713 user 0m0.021s 00:06:51.713 sys 0m0.053s 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:51.713 ************************************ 00:06:51.713 END TEST filesystem_in_capsule_ext4 00:06:51.713 ************************************ 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.713 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.971 ************************************ 00:06:51.971 START TEST filesystem_in_capsule_btrfs 00:06:51.971 ************************************ 00:06:51.971 11:43:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:51.971 btrfs-progs v6.6.2 00:06:51.971 See https://btrfs.readthedocs.io for more information. 00:06:51.971 00:06:51.971 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:51.971 NOTE: several default settings have changed in version 5.15, please make sure 00:06:51.971 this does not affect your deployments: 00:06:51.971 - DUP for metadata (-m dup) 00:06:51.971 - enabled no-holes (-O no-holes) 00:06:51.971 - enabled free-space-tree (-R free-space-tree) 00:06:51.971 00:06:51.971 Label: (null) 00:06:51.971 UUID: 18789c07-b273-4f65-b9b9-ed1ab21a2ea9 00:06:51.971 Node size: 16384 00:06:51.971 Sector size: 4096 00:06:51.971 Filesystem size: 510.00MiB 00:06:51.971 Block group profiles: 00:06:51.971 Data: single 8.00MiB 00:06:51.971 Metadata: DUP 32.00MiB 00:06:51.971 System: DUP 8.00MiB 00:06:51.971 SSD detected: yes 00:06:51.971 Zoned device: no 00:06:51.971 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:51.971 Runtime features: free-space-tree 00:06:51.971 Checksum: crc32c 00:06:51.971 Number of devices: 1 00:06:51.971 Devices: 00:06:51.971 ID SIZE PATH 00:06:51.971 1 510.00MiB /dev/nvme0n1p1 00:06:51.971 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:06:51.971 11:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 824311 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.902 00:06:52.902 real 0m1.139s 00:06:52.902 user 0m0.023s 00:06:52.902 sys 0m0.107s 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:52.902 ************************************ 00:06:52.902 END TEST filesystem_in_capsule_btrfs 00:06:52.902 ************************************ 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.902 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.167 ************************************ 00:06:53.167 START TEST filesystem_in_capsule_xfs 00:06:53.167 ************************************ 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:06:53.167 11:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:53.167 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:53.167 = sectsz=512 attr=2, projid32bit=1 00:06:53.168 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:53.168 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:53.168 data = bsize=4096 blocks=130560, imaxpct=25 00:06:53.168 = sunit=0 swidth=0 blks 00:06:53.168 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:53.168 log =internal log bsize=4096 blocks=16384, version=2 00:06:53.168 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:53.168 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:54.097 Discarding blocks...Done. 
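The xtrace above (common/autotest_common.sh@925-@936) is the make_filesystem helper formatting the test namespace's partition, first as btrfs and here as xfs. A minimal sketch of that helper, reconstructed from the trace alone — the retry bookkeeping around "local i=0" is an assumption, not the verbatim SPDK source:

    # Sketch of make_filesystem as implied by the xtrace; not the verbatim source.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        # ext4 forces with -F; btrfs/xfs force with -f (the branch seen at @930/@933)
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        # Assumption: retry a few times before giving up, then return 0 on success (@944)
        until mkfs."$fstype" $force "$dev_name"; do
            [ $((i++)) -ge 3 ] && return 1
            sleep 1
        done
        return 0
    }

The caller in target/filesystem.sh then mounts the partition, touches and removes a file, syncs, and unmounts — the sequence traced below.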
00:06:54.097 11:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:06:54.097 11:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 824311 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.622 00:06:56.622 real 0m3.196s 00:06:56.622 user 0m0.022s 00:06:56.622 sys 0m0.056s 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:56.622 ************************************ 00:06:56.622 END TEST filesystem_in_capsule_xfs 00:06:56.622 ************************************ 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:56.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.622 11:43:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:06:56.622 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 824311 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 824311 ']' 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 824311 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 824311 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 824311' 00:06:56.623 killing process with pid 824311 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 824311 00:06:56.623 11:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 824311 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:57.190 00:06:57.190 real 0m11.628s 00:06:57.190 user 0m44.394s 00:06:57.190 sys 0m1.745s 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.190 ************************************ 00:06:57.190 END TEST nvmf_filesystem_in_capsule 00:06:57.190 ************************************ 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.190 rmmod nvme_tcp 00:06:57.190 rmmod nvme_fabrics 00:06:57.190 rmmod nvme_keyring 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.190 11:43:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.112 11:43:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:59.112 00:06:59.112 real 0m27.427s 00:06:59.112 user 1m28.540s 00:06:59.112 sys 0m5.007s 00:06:59.112 11:43:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.112 11:43:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 ************************************ 00:06:59.112 END TEST nvmf_filesystem 00:06:59.112 ************************************ 00:06:59.112 11:43:48 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:59.112 11:43:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:59.112 11:43:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.112 11:43:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 ************************************ 00:06:59.112 START TEST nvmf_target_discovery 00:06:59.112 ************************************ 00:06:59.112 11:43:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:59.371 * Looking for test storage... 
00:06:59.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.371 11:43:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:01.274 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.275 11:43:50 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:01.275 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:01.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:01.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:01.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:07:01.275 00:07:01.275 --- 10.0.0.2 ping statistics --- 00:07:01.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.275 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:07:01.275 00:07:01.275 --- 10.0.0.1 ping statistics --- 00:07:01.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.275 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=827783 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 827783 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 827783 ']' 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:01.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:01.275 11:43:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.534 [2024-07-12 11:43:50.781211] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:07:01.534 [2024-07-12 11:43:50.781303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.534 [2024-07-12 11:43:50.849971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.534 [2024-07-12 11:43:50.966299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.534 [2024-07-12 11:43:50.966362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.534 [2024-07-12 11:43:50.966378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.534 [2024-07-12 11:43:50.966391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.534 [2024-07-12 11:43:50.966402] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.534 [2024-07-12 11:43:50.966480] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.534 [2024-07-12 11:43:50.966551] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.534 [2024-07-12 11:43:50.966653] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.534 [2024-07-12 11:43:50.966650] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 [2024-07-12 11:43:51.785861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:02.469 11:43:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 Null1 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 [2024-07-12 11:43:51.826118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 Null2 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:02.469 11:43:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 Null3 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 Null4 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.469 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.470 11:43:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.470 11:43:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:02.728 00:07:02.728 Discovery Log Number of Records 6, Generation counter 6 00:07:02.728 =====Discovery Log Entry 0====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: current discovery subsystem 00:07:02.728 treq: not required 00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4420 00:07:02.728 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: explicit discovery connections, duplicate discovery information 00:07:02.728 sectype: none 00:07:02.728 =====Discovery Log Entry 1====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: nvme subsystem 00:07:02.728 treq: not required 00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4420 00:07:02.728 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: none 00:07:02.728 sectype: none 00:07:02.728 =====Discovery Log Entry 2====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: nvme subsystem 00:07:02.728 treq: not required 00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4420 00:07:02.728 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: none 00:07:02.728 sectype: none 00:07:02.728 =====Discovery Log Entry 3====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: nvme subsystem 00:07:02.728 treq: not required 00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4420 00:07:02.728 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: none 00:07:02.728 sectype: none 00:07:02.728 =====Discovery Log Entry 4====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: nvme subsystem 00:07:02.728 treq: not required 
00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4420 00:07:02.728 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: none 00:07:02.728 sectype: none 00:07:02.728 =====Discovery Log Entry 5====== 00:07:02.728 trtype: tcp 00:07:02.728 adrfam: ipv4 00:07:02.728 subtype: discovery subsystem referral 00:07:02.728 treq: not required 00:07:02.728 portid: 0 00:07:02.728 trsvcid: 4430 00:07:02.728 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:02.728 traddr: 10.0.0.2 00:07:02.728 eflags: none 00:07:02.728 sectype: none 00:07:02.728 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:02.728 Perform nvmf subsystem discovery via RPC 00:07:02.728 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:02.728 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.728 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.728 [ 00:07:02.728 { 00:07:02.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:02.728 "subtype": "Discovery", 00:07:02.728 "listen_addresses": [ 00:07:02.728 { 00:07:02.728 "trtype": "TCP", 00:07:02.728 "adrfam": "IPv4", 00:07:02.728 "traddr": "10.0.0.2", 00:07:02.728 "trsvcid": "4420" 00:07:02.728 } 00:07:02.728 ], 00:07:02.728 "allow_any_host": true, 00:07:02.728 "hosts": [] 00:07:02.728 }, 00:07:02.728 { 00:07:02.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:02.728 "subtype": "NVMe", 00:07:02.728 "listen_addresses": [ 00:07:02.728 { 00:07:02.728 "trtype": "TCP", 00:07:02.728 "adrfam": "IPv4", 00:07:02.728 "traddr": "10.0.0.2", 00:07:02.728 "trsvcid": "4420" 00:07:02.728 } 00:07:02.728 ], 00:07:02.728 "allow_any_host": true, 00:07:02.728 "hosts": [], 00:07:02.728 "serial_number": "SPDK00000000000001", 00:07:02.728 "model_number": "SPDK bdev Controller", 00:07:02.728 "max_namespaces": 32, 00:07:02.728 "min_cntlid": 1, 00:07:02.728 "max_cntlid": 65519, 00:07:02.728 "namespaces": [ 00:07:02.728 { 00:07:02.728 "nsid": 1, 00:07:02.728 "bdev_name": "Null1", 00:07:02.728 "name": "Null1", 00:07:02.728 "nguid": "7640ACAABD104881B588F73CA4407ED3", 00:07:02.728 "uuid": "7640acaa-bd10-4881-b588-f73ca4407ed3" 00:07:02.728 } 00:07:02.728 ] 00:07:02.728 }, 00:07:02.728 { 00:07:02.728 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:02.728 "subtype": "NVMe", 00:07:02.728 "listen_addresses": [ 00:07:02.728 { 00:07:02.728 "trtype": "TCP", 00:07:02.728 "adrfam": "IPv4", 00:07:02.728 "traddr": "10.0.0.2", 00:07:02.728 "trsvcid": "4420" 00:07:02.728 } 00:07:02.728 ], 00:07:02.728 "allow_any_host": true, 00:07:02.728 "hosts": [], 00:07:02.728 "serial_number": "SPDK00000000000002", 00:07:02.728 "model_number": "SPDK bdev Controller", 00:07:02.728 "max_namespaces": 32, 00:07:02.728 "min_cntlid": 1, 00:07:02.728 "max_cntlid": 65519, 00:07:02.728 "namespaces": [ 00:07:02.728 { 00:07:02.728 "nsid": 1, 00:07:02.728 "bdev_name": "Null2", 00:07:02.728 "name": "Null2", 00:07:02.729 "nguid": "F19DF825697D42DE80D176A314ED4FFD", 00:07:02.729 "uuid": "f19df825-697d-42de-80d1-76a314ed4ffd" 00:07:02.729 } 00:07:02.729 ] 00:07:02.729 }, 00:07:02.729 { 00:07:02.729 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:02.729 "subtype": "NVMe", 00:07:02.729 "listen_addresses": [ 00:07:02.729 { 00:07:02.729 "trtype": "TCP", 00:07:02.729 "adrfam": "IPv4", 00:07:02.729 "traddr": "10.0.0.2", 00:07:02.729 "trsvcid": "4420" 00:07:02.729 } 00:07:02.729 ], 00:07:02.729 "allow_any_host": true, 
00:07:02.729 "hosts": [], 00:07:02.729 "serial_number": "SPDK00000000000003", 00:07:02.729 "model_number": "SPDK bdev Controller", 00:07:02.729 "max_namespaces": 32, 00:07:02.729 "min_cntlid": 1, 00:07:02.729 "max_cntlid": 65519, 00:07:02.729 "namespaces": [ 00:07:02.729 { 00:07:02.729 "nsid": 1, 00:07:02.729 "bdev_name": "Null3", 00:07:02.729 "name": "Null3", 00:07:02.729 "nguid": "E10A969FB08B4D399D0BE39DC7E091F1", 00:07:02.729 "uuid": "e10a969f-b08b-4d39-9d0b-e39dc7e091f1" 00:07:02.729 } 00:07:02.729 ] 00:07:02.729 }, 00:07:02.729 { 00:07:02.729 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:02.729 "subtype": "NVMe", 00:07:02.729 "listen_addresses": [ 00:07:02.729 { 00:07:02.729 "trtype": "TCP", 00:07:02.729 "adrfam": "IPv4", 00:07:02.729 "traddr": "10.0.0.2", 00:07:02.729 "trsvcid": "4420" 00:07:02.729 } 00:07:02.729 ], 00:07:02.729 "allow_any_host": true, 00:07:02.729 "hosts": [], 00:07:02.729 "serial_number": "SPDK00000000000004", 00:07:02.729 "model_number": "SPDK bdev Controller", 00:07:02.729 "max_namespaces": 32, 00:07:02.729 "min_cntlid": 1, 00:07:02.729 "max_cntlid": 65519, 00:07:02.729 "namespaces": [ 00:07:02.729 { 00:07:02.729 "nsid": 1, 00:07:02.729 "bdev_name": "Null4", 00:07:02.729 "name": "Null4", 00:07:02.729 "nguid": "81B9CF65C0294B4FA7F46F4BD8537BDB", 00:07:02.729 "uuid": "81b9cf65-c029-4b4f-a7f4-6f4bd8537bdb" 00:07:02.729 } 00:07:02.729 ] 00:07:02.729 } 00:07:02.729 ] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
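The deletion trace above and continuing below follows the teardown loop in target/discovery.sh (@42-@44): each of the four null-bdev subsystems created earlier is removed and its backing bdev deleted, after which the discovery referral added during setup is dropped (@47). Reconstructed from the xtrace, the loop is approximately as follows (helper and argument names are taken from the trace; anything beyond that is an assumption):

    # Approximate teardown loop from target/discovery.sh as seen in the xtrace.
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # @43
        rpc_cmd bdev_null_delete "Null$i"                             # @44
    done
    # Remove the referral registered during setup (@47)
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430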
00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.729 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.729 rmmod nvme_tcp 00:07:02.729 rmmod nvme_fabrics 00:07:02.729 rmmod nvme_keyring 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 827783 ']' 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 827783 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 827783 ']' 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 827783 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 827783 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 827783' 00:07:02.988 killing process with pid 827783 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 827783 00:07:02.988 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 827783 00:07:03.247 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.247 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.247 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.247 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.248 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.248 11:43:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.248 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.248 11:43:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.150 11:43:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.150 00:07:05.150 real 0m5.972s 00:07:05.150 user 0m7.044s 00:07:05.150 sys 0m1.845s 00:07:05.150 11:43:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.150 11:43:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:05.150 ************************************ 00:07:05.150 END TEST nvmf_target_discovery 00:07:05.150 ************************************ 00:07:05.150 11:43:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:05.150 11:43:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:05.150 11:43:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.150 11:43:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.150 ************************************ 00:07:05.150 START TEST nvmf_referrals 00:07:05.150 ************************************ 00:07:05.150 11:43:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:05.409 * Looking for test storage... 00:07:05.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:05.409 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.410 11:43:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.311 11:43:56 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.311 11:43:56 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.311 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.312 11:43:56 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:07:07.312 00:07:07.312 --- 10.0.0.2 ping statistics --- 00:07:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.312 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:07:07.312 00:07:07.312 --- 10.0.0.1 ping statistics --- 00:07:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.312 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=829882 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 829882 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 829882 ']' 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.312 11:43:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.312 [2024-07-12 11:43:56.753518] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:07:07.312 [2024-07-12 11:43:56.753602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.570 [2024-07-12 11:43:56.827279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.570 [2024-07-12 11:43:56.945675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.570 [2024-07-12 11:43:56.945735] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.570 [2024-07-12 11:43:56.945752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.570 [2024-07-12 11:43:56.945766] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.570 [2024-07-12 11:43:56.945778] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.570 [2024-07-12 11:43:56.945893] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.570 [2024-07-12 11:43:56.945939] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.570 [2024-07-12 11:43:56.946034] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.570 [2024-07-12 11:43:56.946037] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 [2024-07-12 11:43:57.739829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 [2024-07-12 11:43:57.752042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
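At this point the target is running inside the cvl_0_0_ns_spdk namespace with a TCP transport and a discovery listener on 10.0.0.2:8009, and the referral entries exercised next are plain RPC calls. A minimal sketch of that setup, assuming scripts/rpc.py and the default RPC socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # same transport options as in the trace above
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430   # one referral per test address
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length                # the test expects 3 here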
00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.502 11:43:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.760 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.026 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:09.026 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:09.027 11:43:58 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.027 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:09.328 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.586 11:43:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.586 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:09.843 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:09.843 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:09.843 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.844 rmmod nvme_tcp 00:07:09.844 rmmod nvme_fabrics 00:07:09.844 rmmod nvme_keyring 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 829882 ']' 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 829882 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 829882 ']' 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 829882 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:07:09.844 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 829882 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 829882' 00:07:10.102 killing process with pid 829882 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 829882 00:07:10.102 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 829882 00:07:10.360 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.361 11:43:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.265 11:44:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:12.265 00:07:12.265 real 0m7.070s 00:07:12.265 user 0m12.190s 00:07:12.265 sys 0m2.073s 00:07:12.265 11:44:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 
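The referrals test that just finished follows one pattern throughout: add a referral over RPC, confirm it appears both in nvmf_discovery_get_referrals and in the discovery log page an initiator sees, then remove it and confirm it is gone. A condensed sketch of one such round trip, assuming scripts/rpc.py on the target side and nvme-cli on the initiator side:

  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'        # target-side view of the referral
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # initiator-side view
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430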
00:07:12.265 11:44:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:12.265 ************************************ 00:07:12.265 END TEST nvmf_referrals 00:07:12.265 ************************************ 00:07:12.265 11:44:01 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:12.265 11:44:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:12.265 11:44:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:12.265 11:44:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.265 ************************************ 00:07:12.265 START TEST nvmf_connect_disconnect 00:07:12.265 ************************************ 00:07:12.265 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:12.525 * Looking for test storage... 00:07:12.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.525 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.526 
11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.526 11:44:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
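nvmftestinit now rebuilds the same split setup the previous tests used: one detected E810 port is moved into a network namespace and acts as the target interface, while the other stays in the root namespace as the initiator interface. A condensed sketch of the commands involved, as traced earlier in this log (the cvl_0_0/cvl_0_1 names come from the detected ports and may differ on other machines):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic reach the target port
  ping -c 1 10.0.0.2                                                     # reachability check from the root namespace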
00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.425 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.425 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.425 
11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.425 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.426 11:44:03 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.426 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.683 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.683 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.683 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:14.683 00:07:14.683 --- 10.0.0.2 ping statistics --- 00:07:14.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.683 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:14.683 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:14.683 00:07:14.683 --- 10.0.0.1 ping statistics --- 00:07:14.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.684 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=832302 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 832302 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 832302 ']' 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:14.684 11:44:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.684 [2024-07-12 11:44:04.035464] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
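[annotation] The plumbing traced above (target port moved into a namespace, both ends addressed, port 4420 opened, reachability checked both ways, target launched inside the namespace) condenses to roughly the following. Interface and namespace names are taken from the log; the socket-wait loop is only an assumption standing in for waitforlisten:

    # Hedged recap of the setup steps above, not the script verbatim.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done # assumption: stand-in for waitforlisten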
00:07:14.684 [2024-07-12 11:44:04.035562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.684 [2024-07-12 11:44:04.107254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.941 [2024-07-12 11:44:04.223959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.941 [2024-07-12 11:44:04.224022] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.941 [2024-07-12 11:44:04.224048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.941 [2024-07-12 11:44:04.224061] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.941 [2024-07-12 11:44:04.224073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.941 [2024-07-12 11:44:04.224155] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.941 [2024-07-12 11:44:04.224224] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.941 [2024-07-12 11:44:04.224354] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.941 [2024-07-12 11:44:04.224357] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 [2024-07-12 11:44:05.044039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:15.874 11:44:05 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 [2024-07-12 11:44:05.105505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:15.874 11:44:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:19.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.005 rmmod nvme_tcp 00:07:30.005 rmmod nvme_fabrics 00:07:30.005 rmmod nvme_keyring 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 832302 ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 832302 ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 832302' 00:07:30.005 killing process with pid 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 832302 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.005 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.006 11:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.563 11:44:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.563 00:07:32.563 real 0m19.696s 00:07:32.563 user 1m0.171s 00:07:32.563 sys 0m3.360s 00:07:32.563 11:44:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.563 11:44:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.563 ************************************ 00:07:32.563 END TEST nvmf_connect_disconnect 00:07:32.563 ************************************ 00:07:32.563 11:44:21 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:32.563 11:44:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:32.563 11:44:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.563 11:44:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.563 ************************************ 00:07:32.563 START TEST nvmf_multitarget 00:07:32.563 ************************************ 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:32.563 * Looking for test storage... 
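[annotation] The connect_disconnect run that just finished amounts to a short RPC sequence (transport, malloc bdev, subsystem, namespace, listener) followed by five connect/disconnect cycles from the initiator side. A hedged sketch using scripts/rpc.py and nvme-cli directly rather than the framework's rpc_cmd wrapper; the exact connect flags the script passes may differ:

    # Hedged sketch of the test body above, not the script verbatim.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                                          # num_iterations=5
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1                # "... disconnected 1 controller(s)"
    done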
00:07:32.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.563 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
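[annotation] The common.sh header sourced above generates a fresh host identity with nvme gen-hostnqn and keeps both the NQN and its bare UUID for later nvme connect calls. A small sketch of that pattern; deriving the host ID by stripping the NQN prefix is an assumption about how common.sh computes it:

    # Hedged sketch of the host-identity setup; the suffix-stripping is an assumption.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare <uuid>, matching the value in the trace above
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later, e.g.: nvme connect -t tcp -a 10.0.0.2 -s 4420 -n <subsystem nqn> "${NVME_HOST[@]}"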
00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.564 11:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.015 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:34.274 00:07:34.274 --- 10.0.0.2 ping statistics --- 00:07:34.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.274 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:07:34.274 00:07:34.274 --- 10.0.0.1 ping statistics --- 00:07:34.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.274 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=836075 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 836075 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 836075 ']' 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:34.274 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:34.274 [2024-07-12 11:44:23.658478] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
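[annotation] Once this second target app is up, the multitarget test that follows drives multitarget_rpc.py: count the default target, create nvmf_tgt_1 and nvmf_tgt_2, confirm three targets exist, delete both, and confirm one remains. A condensed sketch of that flow, mirroring the jq-based count checks visible in the trace below:

    # Hedged sketch of the create/delete flow exercised below.
    mt=./test/nvmf/target/multitarget_rpc.py
    [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]      # only the default target
    $mt nvmf_create_target -n nvmf_tgt_1 -s 32
    $mt nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($mt nvmf_get_targets | jq length)" -eq 3 ]
    $mt nvmf_delete_target -n nvmf_tgt_1
    $mt nvmf_delete_target -n nvmf_tgt_2
    [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]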
00:07:34.274 [2024-07-12 11:44:23.658557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.275 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.275 [2024-07-12 11:44:23.724099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.532 [2024-07-12 11:44:23.837761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.532 [2024-07-12 11:44:23.837835] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.532 [2024-07-12 11:44:23.837848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.532 [2024-07-12 11:44:23.837877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.532 [2024-07-12 11:44:23.837889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.532 [2024-07-12 11:44:23.837983] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.532 [2024-07-12 11:44:23.838067] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.532 [2024-07-12 11:44:23.838042] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.532 [2024-07-12 11:44:23.838069] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:34.532 11:44:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:34.789 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:34.789 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:34.789 "nvmf_tgt_1" 00:07:34.789 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:35.047 "nvmf_tgt_2" 00:07:35.047 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:35.047 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:35.047 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:35.047 
11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:35.047 true 00:07:35.047 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:35.306 true 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:35.306 rmmod nvme_tcp 00:07:35.306 rmmod nvme_fabrics 00:07:35.306 rmmod nvme_keyring 00:07:35.306 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 836075 ']' 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 836075 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 836075 ']' 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 836075 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 836075 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 836075' 00:07:35.566 killing process with pid 836075 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 836075 00:07:35.566 11:44:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 836075 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.827 11:44:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.735 11:44:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.735 00:07:37.735 real 0m5.670s 00:07:37.735 user 0m6.361s 00:07:37.735 sys 0m1.851s 00:07:37.735 11:44:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.735 11:44:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:37.735 ************************************ 00:07:37.735 END TEST nvmf_multitarget 00:07:37.735 ************************************ 00:07:37.735 11:44:27 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:37.735 11:44:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:37.735 11:44:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.735 11:44:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.735 ************************************ 00:07:37.735 START TEST nvmf_rpc 00:07:37.735 ************************************ 00:07:37.735 11:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:37.995 * Looking for test storage... 00:07:37.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.995 11:44:27 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.995 11:44:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.996 
11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.996 11:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:39.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:39.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.915 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:39.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.916 
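The discovery loop above matches PCI functions against a table of Intel and Mellanox device IDs and then resolves each match to its kernel net device through sysfs. Condensed into a stand-alone sketch using the first E810 port found in the trace; the PCI address and the cvl_0_0 name are taken from this log and will differ on other machines:

# Resolve the net device behind a PCI function the same way the trace does.
pci=0000:0a:00.0                                   # reported as 0x8086 - 0x159b (ice) above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0 here
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"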
11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:39.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:07:39.916 00:07:39.916 --- 10.0.0.2 ping statistics --- 00:07:39.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.916 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:39.916 00:07:39.916 --- 10.0.0.1 ping statistics --- 00:07:39.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.916 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=838176 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 838176 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 838176 ']' 00:07:39.916 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.917 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:39.917 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.917 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:39.917 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.917 [2024-07-12 11:44:29.399021] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
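nvmf_tcp_init, traced above, wires the two E810 ports back to back so target and initiator can share one host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, port 4420 is opened on the initiator interface, and a ping in each direction proves the path. Condensed from the trace, same names and addresses, run as root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

With that topology up, the target itself is started inside the namespace, as the next lines show (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and the harness waits for its RPC socket before issuing any nvmf_* RPCs.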
00:07:39.917 [2024-07-12 11:44:29.399102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.178 [2024-07-12 11:44:29.464941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.178 [2024-07-12 11:44:29.577198] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.178 [2024-07-12 11:44:29.577270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.178 [2024-07-12 11:44:29.577283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.178 [2024-07-12 11:44:29.577295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.178 [2024-07-12 11:44:29.577311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.178 [2024-07-12 11:44:29.577401] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.178 [2024-07-12 11:44:29.577466] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.178 [2024-07-12 11:44:29.577533] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.178 [2024-07-12 11:44:29.577535] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:40.437 "tick_rate": 2700000000, 00:07:40.437 "poll_groups": [ 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_000", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [] 00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_001", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [] 00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_002", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [] 
00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_003", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [] 00:07:40.437 } 00:07:40.437 ] 00:07:40.437 }' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 [2024-07-12 11:44:29.820861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:40.437 "tick_rate": 2700000000, 00:07:40.437 "poll_groups": [ 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_000", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [ 00:07:40.437 { 00:07:40.437 "trtype": "TCP" 00:07:40.437 } 00:07:40.437 ] 00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_001", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [ 00:07:40.437 { 00:07:40.437 "trtype": "TCP" 00:07:40.437 } 00:07:40.437 ] 00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_002", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [ 00:07:40.437 { 00:07:40.437 "trtype": "TCP" 00:07:40.437 } 00:07:40.437 ] 00:07:40.437 }, 00:07:40.437 { 00:07:40.437 "name": "nvmf_tgt_poll_group_003", 00:07:40.437 "admin_qpairs": 0, 00:07:40.437 "io_qpairs": 0, 00:07:40.437 "current_admin_qpairs": 0, 00:07:40.437 "current_io_qpairs": 0, 00:07:40.437 "pending_bdev_io": 0, 00:07:40.437 "completed_nvme_io": 0, 00:07:40.437 "transports": [ 00:07:40.437 { 00:07:40.437 "trtype": "TCP" 00:07:40.437 } 00:07:40.437 ] 00:07:40.437 } 00:07:40.437 ] 
00:07:40.437 }' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.437 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 Malloc1 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.697 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 [2024-07-12 11:44:29.969428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:07:40.698 11:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:40.698 [2024-07-12 11:44:29.991801] ctrlr.c: 817:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:40.698 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:40.698 could not add new controller: failed to write to nvme-fabrics device 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.698 11:44:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.268 11:44:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.268 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:41.268 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.268 11:44:30 
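The exchange above is the per-host access-control check: with allow-any-host disabled, the first connect attempt is rejected by the target ("does not allow host"), and it only succeeds after the host NQN is added to the subsystem's allow list. Condensed below; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target started earlier, and $NVME_HOSTNQN/$NVME_HOSTID are the values from nvmf/common.sh:

rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # only allow-listed hosts
# Expected to fail with "does not allow host ...":
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" || true
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
# The same connect now succeeds and exposes the Malloc1 namespace with
# serial SPDKISFASTANDAWESOME:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"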
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:41.268 11:44:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.805 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.806 [2024-07-12 11:44:32.811329] ctrlr.c: 817:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:43.806 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:43.806 could not add new controller: failed to write to nvme-fabrics device 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.806 11:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.065 11:44:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.065 11:44:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:44.065 11:44:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.065 11:44:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:44.065 11:44:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:45.966 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 [2024-07-12 11:44:35.548357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.226 11:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.795 11:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.795 11:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:46.795 11:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.795 11:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:46.795 11:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:48.699 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 [2024-07-12 11:44:38.311166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.958 
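From here the test repeats the same subsystem lifecycle loops=5 times. One full pass, condensed from the iteration that just completed above; rpc_cmd as before, waitforserial from autotest_common.sh:

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # shared Malloc1 bdev as nsid 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
waitforserial SPDKISFASTANDAWESOME                  # block until the namespace shows up
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1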
11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.958 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.524 11:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.525 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:49.525 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.525 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:49.525 11:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:52.061 11:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.061 11:44:41 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 [2024-07-12 11:44:41.082235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.061 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:52.321 11:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.321 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:52.321 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.321 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:52.321 11:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 [2024-07-12 11:44:43.892227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.860 11:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.118 11:44:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.118 11:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:55.118 11:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
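The waitforserial polling that follows is what turns a connect into a pass/fail check: it retries for roughly thirty seconds until lsblk reports a block device whose serial matches the subsystem's serial number. A simplified reconstruction of the helper from the commands it traces; the real function in autotest_common.sh also takes an expected device count, defaulting to 1:

waitforserial() {
    # Poll lsblk until a device with the given serial appears, as traced below.
    local serial=$1 count=${2:-1} i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == count )) && return 0
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME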
00:07:55.118 11:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:55.118 11:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:07:57.073 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 [2024-07-12 11:44:46.626881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.331 11:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.900 11:44:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.900 11:44:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:07:57.900 11:44:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.900 11:44:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:57.900 11:44:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.438 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 [2024-07-12 11:44:49.487038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 [2024-07-12 11:44:49.535111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 [2024-07-12 11:44:49.583291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.439 [2024-07-12 11:44:49.631437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.439 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 [2024-07-12 11:44:49.679592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:00.440 "tick_rate": 2700000000, 00:08:00.440 "poll_groups": [ 00:08:00.440 { 00:08:00.440 "name": "nvmf_tgt_poll_group_000", 00:08:00.440 "admin_qpairs": 2, 00:08:00.440 
"io_qpairs": 84, 00:08:00.440 "current_admin_qpairs": 0, 00:08:00.440 "current_io_qpairs": 0, 00:08:00.440 "pending_bdev_io": 0, 00:08:00.440 "completed_nvme_io": 165, 00:08:00.440 "transports": [ 00:08:00.440 { 00:08:00.440 "trtype": "TCP" 00:08:00.440 } 00:08:00.440 ] 00:08:00.440 }, 00:08:00.440 { 00:08:00.440 "name": "nvmf_tgt_poll_group_001", 00:08:00.440 "admin_qpairs": 2, 00:08:00.440 "io_qpairs": 84, 00:08:00.440 "current_admin_qpairs": 0, 00:08:00.440 "current_io_qpairs": 0, 00:08:00.440 "pending_bdev_io": 0, 00:08:00.440 "completed_nvme_io": 154, 00:08:00.440 "transports": [ 00:08:00.440 { 00:08:00.440 "trtype": "TCP" 00:08:00.440 } 00:08:00.440 ] 00:08:00.440 }, 00:08:00.440 { 00:08:00.440 "name": "nvmf_tgt_poll_group_002", 00:08:00.440 "admin_qpairs": 1, 00:08:00.440 "io_qpairs": 84, 00:08:00.440 "current_admin_qpairs": 0, 00:08:00.440 "current_io_qpairs": 0, 00:08:00.440 "pending_bdev_io": 0, 00:08:00.440 "completed_nvme_io": 184, 00:08:00.440 "transports": [ 00:08:00.440 { 00:08:00.440 "trtype": "TCP" 00:08:00.440 } 00:08:00.440 ] 00:08:00.440 }, 00:08:00.440 { 00:08:00.440 "name": "nvmf_tgt_poll_group_003", 00:08:00.440 "admin_qpairs": 2, 00:08:00.440 "io_qpairs": 84, 00:08:00.440 "current_admin_qpairs": 0, 00:08:00.440 "current_io_qpairs": 0, 00:08:00.440 "pending_bdev_io": 0, 00:08:00.440 "completed_nvme_io": 183, 00:08:00.440 "transports": [ 00:08:00.440 { 00:08:00.440 "trtype": "TCP" 00:08:00.440 } 00:08:00.440 ] 00:08:00.440 } 00:08:00.440 ] 00:08:00.440 }' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.440 rmmod nvme_tcp 00:08:00.440 rmmod nvme_fabrics 00:08:00.440 rmmod nvme_keyring 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:00.440 11:44:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 838176 ']' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 838176 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 838176 ']' 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 838176 00:08:00.440 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 838176 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 838176' 00:08:00.441 killing process with pid 838176 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 838176 00:08:00.441 11:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 838176 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.010 11:44:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.916 11:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.916 00:08:02.916 real 0m25.059s 00:08:02.916 user 1m21.650s 00:08:02.916 sys 0m3.912s 00:08:02.916 11:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.916 11:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.916 ************************************ 00:08:02.916 END TEST nvmf_rpc 00:08:02.916 ************************************ 00:08:02.916 11:44:52 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:02.916 11:44:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:02.916 11:44:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.916 11:44:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.916 ************************************ 00:08:02.916 START TEST nvmf_invalid 00:08:02.916 ************************************ 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:02.916 * Looking for test storage... 
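Just before the teardown above, target/rpc.sh@110-113 fetches nvmf_get_stats and sums the per-poll-group qpair counters with the small jq+awk helper jsum, asserting that both totals are non-zero. A standalone sketch of that check, assuming rpc.py can reach the running target:

# sum admin and I/O qpairs across all poll groups, as jsum does in target/rpc.sh
stats=$(rpc.py nvmf_get_stats)
admin_total=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
io_total=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
(( admin_total > 0 )) && (( io_total > 0 )) || echo "no qpairs were created during the test"

In the run above the totals come out to 7 admin and 336 I/O qpairs across the four poll groups.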
00:08:02.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:02.916 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.917 11:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.448 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:05.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:08:05.449 00:08:05.449 --- 10.0.0.2 ping statistics --- 00:08:05.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.449 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:08:05.449 00:08:05.449 --- 10.0.0.1 ping statistics --- 00:08:05.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.449 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=842679 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 842679 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 842679 ']' 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:05.449 11:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:05.449 [2024-07-12 11:44:54.545323] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
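The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) moves one port of the NIC pair into a private network namespace so that the target (10.0.0.2 inside the namespace) and the initiator (10.0.0.1 in the host namespace) exchange real NVMe/TCP traffic on a single machine. A condensed sketch of that wiring, using the cvl_0_0/cvl_0_1 interface names from this run and omitting the initial address flushes:

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping blocks in the log are the script confirming both directions work before nvmf_tgt is started; the NVMF_APP line that follows prefixes the target command with ip netns exec cvl_0_0_ns_spdk so the target runs inside the namespace.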
00:08:05.449 [2024-07-12 11:44:54.545408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.449 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.449 [2024-07-12 11:44:54.610938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.449 [2024-07-12 11:44:54.725282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.449 [2024-07-12 11:44:54.725343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.449 [2024-07-12 11:44:54.725359] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.449 [2024-07-12 11:44:54.725371] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.449 [2024-07-12 11:44:54.725383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.449 [2024-07-12 11:44:54.725461] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.449 [2024-07-12 11:44:54.725532] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.449 [2024-07-12 11:44:54.725627] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.449 [2024-07-12 11:44:54.725630] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:06.017 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1830 00:08:06.275 [2024-07-12 11:44:55.749427] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:06.534 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:06.534 { 00:08:06.534 "nqn": "nqn.2016-06.io.spdk:cnode1830", 00:08:06.534 "tgt_name": "foobar", 00:08:06.534 "method": "nvmf_create_subsystem", 00:08:06.534 "req_id": 1 00:08:06.534 } 00:08:06.534 Got JSON-RPC error response 00:08:06.534 response: 00:08:06.534 { 00:08:06.534 "code": -32603, 00:08:06.534 "message": "Unable to find target foobar" 00:08:06.534 }' 00:08:06.534 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:06.534 { 00:08:06.534 "nqn": "nqn.2016-06.io.spdk:cnode1830", 00:08:06.534 "tgt_name": "foobar", 00:08:06.534 "method": "nvmf_create_subsystem", 00:08:06.534 "req_id": 1 00:08:06.534 } 00:08:06.534 Got JSON-RPC error response 00:08:06.534 response: 00:08:06.534 { 00:08:06.534 "code": -32603, 00:08:06.534 "message": "Unable to find target foobar" 00:08:06.534 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:06.534 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:06.534 11:44:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15635 00:08:06.534 [2024-07-12 11:44:55.986258] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15635: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:06.534 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:06.534 { 00:08:06.534 "nqn": "nqn.2016-06.io.spdk:cnode15635", 00:08:06.534 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:06.534 "method": "nvmf_create_subsystem", 00:08:06.534 "req_id": 1 00:08:06.534 } 00:08:06.534 Got JSON-RPC error response 00:08:06.534 response: 00:08:06.534 { 00:08:06.534 "code": -32602, 00:08:06.534 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:06.534 }' 00:08:06.534 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:06.534 { 00:08:06.534 "nqn": "nqn.2016-06.io.spdk:cnode15635", 00:08:06.534 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:06.534 "method": "nvmf_create_subsystem", 00:08:06.534 "req_id": 1 00:08:06.534 } 00:08:06.534 Got JSON-RPC error response 00:08:06.534 response: 00:08:06.534 { 00:08:06.534 "code": -32602, 00:08:06.534 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:06.534 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:06.534 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:06.534 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3505 00:08:06.792 [2024-07-12 11:44:56.239043] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3505: invalid model number 'SPDK_Controller' 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:06.792 { 00:08:06.792 "nqn": "nqn.2016-06.io.spdk:cnode3505", 00:08:06.792 "model_number": "SPDK_Controller\u001f", 00:08:06.792 "method": "nvmf_create_subsystem", 00:08:06.792 "req_id": 1 00:08:06.792 } 00:08:06.792 Got JSON-RPC error response 00:08:06.792 response: 00:08:06.792 { 00:08:06.792 "code": -32602, 00:08:06.792 "message": "Invalid MN SPDK_Controller\u001f" 00:08:06.792 }' 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:06.792 { 00:08:06.792 "nqn": "nqn.2016-06.io.spdk:cnode3505", 00:08:06.792 "model_number": "SPDK_Controller\u001f", 00:08:06.792 "method": "nvmf_create_subsystem", 00:08:06.792 "req_id": 1 00:08:06.792 } 00:08:06.792 Got JSON-RPC error response 00:08:06.792 response: 00:08:06.792 { 00:08:06.792 "code": -32602, 00:08:06.792 "message": "Invalid MN SPDK_Controller\u001f" 00:08:06.792 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.792 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:06.793 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 67 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'DJJ%p)$$|{x6FMsCbfVN{' 00:08:07.052 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'DJJ%p)$$|{x6FMsCbfVN{' nqn.2016-06.io.spdk:cnode11160 00:08:07.311 [2024-07-12 11:44:56.552069] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11160: invalid serial number 'DJJ%p)$$|{x6FMsCbfVN{' 00:08:07.311 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:07.311 { 00:08:07.311 "nqn": "nqn.2016-06.io.spdk:cnode11160", 00:08:07.311 "serial_number": "DJJ%p)$$|{x6FMsCbfVN{", 00:08:07.311 "method": "nvmf_create_subsystem", 00:08:07.311 "req_id": 1 00:08:07.311 } 00:08:07.311 Got JSON-RPC error response 00:08:07.311 response: 00:08:07.311 { 00:08:07.311 "code": -32602, 
00:08:07.312 "message": "Invalid SN DJJ%p)$$|{x6FMsCbfVN{" 00:08:07.312 }' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:07.312 { 00:08:07.312 "nqn": "nqn.2016-06.io.spdk:cnode11160", 00:08:07.312 "serial_number": "DJJ%p)$$|{x6FMsCbfVN{", 00:08:07.312 "method": "nvmf_create_subsystem", 00:08:07.312 "req_id": 1 00:08:07.312 } 00:08:07.312 Got JSON-RPC error response 00:08:07.312 response: 00:08:07.312 { 00:08:07.312 "code": -32602, 00:08:07.312 "message": "Invalid SN DJJ%p)$$|{x6FMsCbfVN{" 00:08:07.312 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:07.312 11:44:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:07.312 
11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:07.312 
11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:07.312 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 
11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 
11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Rc*o`;L9hG`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+' 00:08:07.313 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Rc*o`;L9hG`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+' nqn.2016-06.io.spdk:cnode19620 00:08:07.572 [2024-07-12 11:44:56.953430] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19620: invalid model number 'Rc*o`;L9hG`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+' 00:08:07.572 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:07.572 { 00:08:07.572 "nqn": "nqn.2016-06.io.spdk:cnode19620", 00:08:07.572 "model_number": "Rc*o`;L9hG\u007f`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+", 00:08:07.572 "method": "nvmf_create_subsystem", 00:08:07.572 "req_id": 1 00:08:07.572 } 00:08:07.572 Got JSON-RPC error response 00:08:07.572 response: 00:08:07.572 { 00:08:07.572 "code": -32602, 00:08:07.572 "message": "Invalid MN Rc*o`;L9hG\u007f`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+" 00:08:07.572 }' 00:08:07.572 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:07.572 { 00:08:07.572 "nqn": "nqn.2016-06.io.spdk:cnode19620", 00:08:07.572 "model_number": "Rc*o`;L9hG\u007f`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+", 00:08:07.572 "method": "nvmf_create_subsystem", 00:08:07.572 "req_id": 1 00:08:07.572 } 00:08:07.572 Got JSON-RPC error response 00:08:07.572 response: 00:08:07.572 { 00:08:07.572 "code": -32602, 00:08:07.572 "message": "Invalid MN Rc*o`;L9hG\u007f`}m{d;>vSe`)rR}DK*c]8rUoxSTz~+" 00:08:07.572 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:07.572 11:44:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
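What the wall of printf/echo steps above amounts to: target/invalid.sh assembles a random printable string one character at a time (printf %x picks a code point between 32 and 127, echo -e turns it back into a character, and the @28 check guards against a result that begins with '-'), then feeds it to nvmf_create_subsystem as a serial number or model number and expects the target to reject it. A minimal standalone sketch of that pattern, with an illustrative rpc.py path and NQN rather than the exact helper used in this run:

    gen_random_s() {
        local length=$1 ll string=''
        for (( ll = 0; ll < length; ll++ )); do
            # pick an ASCII code point in 32..127 and append that character
            string+=$(echo -e "\\x$(printf %x $(( RANDOM % 96 + 32 )))")
        done
        echo "$string"
    }
    serial=$(gen_random_s 21)
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *"Invalid SN"* ]]   # the -d <model_number> variant is checked against "Invalid MN" instead
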
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:07.830 [2024-07-12 11:44:57.206330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.830 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:08.089 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:08.089 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:08.089 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:08.089 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:08.089 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:08.346 [2024-07-12 11:44:57.699947] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:08.346 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:08.346 { 00:08:08.346 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:08.346 "listen_address": { 00:08:08.346 "trtype": "tcp", 00:08:08.346 "traddr": "", 00:08:08.346 "trsvcid": "4421" 00:08:08.346 }, 00:08:08.346 "method": "nvmf_subsystem_remove_listener", 00:08:08.346 "req_id": 1 00:08:08.346 } 00:08:08.346 Got JSON-RPC error response 00:08:08.346 response: 00:08:08.346 { 00:08:08.346 "code": -32602, 00:08:08.346 "message": "Invalid parameters" 00:08:08.346 }' 00:08:08.346 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:08.346 { 00:08:08.346 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:08.346 "listen_address": { 00:08:08.346 "trtype": "tcp", 00:08:08.346 "traddr": "", 00:08:08.346 "trsvcid": "4421" 00:08:08.346 }, 00:08:08.346 "method": "nvmf_subsystem_remove_listener", 00:08:08.346 "req_id": 1 00:08:08.346 } 00:08:08.346 Got JSON-RPC error response 00:08:08.346 response: 00:08:08.346 { 00:08:08.346 "code": -32602, 00:08:08.346 "message": "Invalid parameters" 00:08:08.346 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:08.346 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14284 -i 0 00:08:08.603 [2024-07-12 11:44:57.932720] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14284: invalid cntlid range [0-65519] 00:08:08.603 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:08.603 { 00:08:08.603 "nqn": "nqn.2016-06.io.spdk:cnode14284", 00:08:08.603 "min_cntlid": 0, 00:08:08.603 "method": "nvmf_create_subsystem", 00:08:08.603 "req_id": 1 00:08:08.603 } 00:08:08.603 Got JSON-RPC error response 00:08:08.603 response: 00:08:08.603 { 00:08:08.603 "code": -32602, 00:08:08.603 "message": "Invalid cntlid range [0-65519]" 00:08:08.603 }' 00:08:08.603 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:08.603 { 00:08:08.603 "nqn": "nqn.2016-06.io.spdk:cnode14284", 00:08:08.603 "min_cntlid": 0, 00:08:08.603 "method": "nvmf_create_subsystem", 00:08:08.603 "req_id": 1 00:08:08.603 } 00:08:08.603 Got JSON-RPC error response 00:08:08.603 response: 00:08:08.603 { 00:08:08.603 "code": -32602, 00:08:08.603 "message": "Invalid cntlid range [0-65519]" 00:08:08.603 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:08:08.603 11:44:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27292 -i 65520 00:08:08.861 [2024-07-12 11:44:58.181500] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27292: invalid cntlid range [65520-65519] 00:08:08.861 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:08.861 { 00:08:08.861 "nqn": "nqn.2016-06.io.spdk:cnode27292", 00:08:08.861 "min_cntlid": 65520, 00:08:08.861 "method": "nvmf_create_subsystem", 00:08:08.861 "req_id": 1 00:08:08.861 } 00:08:08.861 Got JSON-RPC error response 00:08:08.861 response: 00:08:08.861 { 00:08:08.861 "code": -32602, 00:08:08.861 "message": "Invalid cntlid range [65520-65519]" 00:08:08.861 }' 00:08:08.861 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:08.861 { 00:08:08.861 "nqn": "nqn.2016-06.io.spdk:cnode27292", 00:08:08.861 "min_cntlid": 65520, 00:08:08.861 "method": "nvmf_create_subsystem", 00:08:08.861 "req_id": 1 00:08:08.861 } 00:08:08.861 Got JSON-RPC error response 00:08:08.861 response: 00:08:08.861 { 00:08:08.861 "code": -32602, 00:08:08.861 "message": "Invalid cntlid range [65520-65519]" 00:08:08.861 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:08.861 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12539 -I 0 00:08:09.119 [2024-07-12 11:44:58.438340] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12539: invalid cntlid range [1-0] 00:08:09.119 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:09.119 { 00:08:09.119 "nqn": "nqn.2016-06.io.spdk:cnode12539", 00:08:09.119 "max_cntlid": 0, 00:08:09.119 "method": "nvmf_create_subsystem", 00:08:09.119 "req_id": 1 00:08:09.119 } 00:08:09.119 Got JSON-RPC error response 00:08:09.119 response: 00:08:09.119 { 00:08:09.119 "code": -32602, 00:08:09.119 "message": "Invalid cntlid range [1-0]" 00:08:09.119 }' 00:08:09.119 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:09.119 { 00:08:09.119 "nqn": "nqn.2016-06.io.spdk:cnode12539", 00:08:09.119 "max_cntlid": 0, 00:08:09.119 "method": "nvmf_create_subsystem", 00:08:09.119 "req_id": 1 00:08:09.119 } 00:08:09.119 Got JSON-RPC error response 00:08:09.119 response: 00:08:09.119 { 00:08:09.119 "code": -32602, 00:08:09.119 "message": "Invalid cntlid range [1-0]" 00:08:09.119 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:09.119 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8395 -I 65520 00:08:09.377 [2024-07-12 11:44:58.695165] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8395: invalid cntlid range [1-65520] 00:08:09.377 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:09.377 { 00:08:09.377 "nqn": "nqn.2016-06.io.spdk:cnode8395", 00:08:09.377 "max_cntlid": 65520, 00:08:09.377 "method": "nvmf_create_subsystem", 00:08:09.377 "req_id": 1 00:08:09.377 } 00:08:09.377 Got JSON-RPC error response 00:08:09.377 response: 00:08:09.377 { 00:08:09.377 "code": -32602, 00:08:09.377 "message": "Invalid cntlid range [1-65520]" 00:08:09.377 }' 00:08:09.377 11:44:58 nvmf_tcp.nvmf_invalid -- 
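These cases sweep the controller-ID limits: -i sets min_cntlid and -I sets max_cntlid, both have to fall within 1..65519 and min must not exceed max, so 0, 65520 and the 6..5 range that follows are all rejected with "Invalid cntlid range [..]". The same sweep condensed into a loop (path and NQN illustrative):

    # $args is left unquoted on purpose so it splits into flag and value
    for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
        out=$(scripts/rpc.py nvmf_create_subsystem $args nqn.2016-06.io.spdk:cnode1 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]
    done
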
target/invalid.sh@80 -- # [[ request: 00:08:09.377 { 00:08:09.377 "nqn": "nqn.2016-06.io.spdk:cnode8395", 00:08:09.377 "max_cntlid": 65520, 00:08:09.377 "method": "nvmf_create_subsystem", 00:08:09.377 "req_id": 1 00:08:09.377 } 00:08:09.377 Got JSON-RPC error response 00:08:09.377 response: 00:08:09.377 { 00:08:09.377 "code": -32602, 00:08:09.377 "message": "Invalid cntlid range [1-65520]" 00:08:09.377 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:09.377 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6449 -i 6 -I 5 00:08:09.635 [2024-07-12 11:44:58.940007] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6449: invalid cntlid range [6-5] 00:08:09.635 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:09.635 { 00:08:09.635 "nqn": "nqn.2016-06.io.spdk:cnode6449", 00:08:09.635 "min_cntlid": 6, 00:08:09.635 "max_cntlid": 5, 00:08:09.635 "method": "nvmf_create_subsystem", 00:08:09.635 "req_id": 1 00:08:09.635 } 00:08:09.635 Got JSON-RPC error response 00:08:09.635 response: 00:08:09.635 { 00:08:09.635 "code": -32602, 00:08:09.635 "message": "Invalid cntlid range [6-5]" 00:08:09.635 }' 00:08:09.635 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:09.635 { 00:08:09.635 "nqn": "nqn.2016-06.io.spdk:cnode6449", 00:08:09.635 "min_cntlid": 6, 00:08:09.635 "max_cntlid": 5, 00:08:09.635 "method": "nvmf_create_subsystem", 00:08:09.635 "req_id": 1 00:08:09.635 } 00:08:09.635 Got JSON-RPC error response 00:08:09.635 response: 00:08:09.635 { 00:08:09.635 "code": -32602, 00:08:09.635 "message": "Invalid cntlid range [6-5]" 00:08:09.635 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:09.635 11:44:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:09.635 { 00:08:09.635 "name": "foobar", 00:08:09.635 "method": "nvmf_delete_target", 00:08:09.635 "req_id": 1 00:08:09.635 } 00:08:09.635 Got JSON-RPC error response 00:08:09.635 response: 00:08:09.635 { 00:08:09.635 "code": -32602, 00:08:09.635 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:09.635 }' 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:09.635 { 00:08:09.635 "name": "foobar", 00:08:09.635 "method": "nvmf_delete_target", 00:08:09.635 "req_id": 1 00:08:09.635 } 00:08:09.635 Got JSON-RPC error response 00:08:09.635 response: 00:08:09.635 { 00:08:09.635 "code": -32602, 00:08:09.635 "message": "The specified target doesn't exist, cannot delete it." 
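The last negative case goes through the multi-target RPC helper instead of rpc.py: deleting a target name that was never created must fail with exactly the message being matched here. Stripped down, with the script path as laid out in this workspace:

    out=$(test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 2>&1) || true
    [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]
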
00:08:09.635 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.635 rmmod nvme_tcp 00:08:09.635 rmmod nvme_fabrics 00:08:09.635 rmmod nvme_keyring 00:08:09.635 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 842679 ']' 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 842679 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 842679 ']' 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 842679 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 842679 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 842679' 00:08:09.894 killing process with pid 842679 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 842679 00:08:09.894 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 842679 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.153 11:44:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.062 11:45:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.062 00:08:12.062 real 0m9.157s 00:08:12.062 user 0m22.383s 00:08:12.062 sys 0m2.434s 00:08:12.062 11:45:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:12.062 11:45:01 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.062 ************************************ 00:08:12.062 END TEST nvmf_invalid 00:08:12.062 ************************************ 00:08:12.062 11:45:01 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:12.062 11:45:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:12.062 11:45:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:12.062 11:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.062 ************************************ 00:08:12.062 START TEST nvmf_abort 00:08:12.062 ************************************ 00:08:12.062 11:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:12.321 * Looking for test storage... 00:08:12.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.321 11:45:01 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.321 11:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.226 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.227 
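Before any traffic flows, common.sh classifies the NICs it may use by PCI vendor and device ID (the e810/x722/mlx arrays built above), and the lines that follow resolve each matching PCI function to its kernel netdev through sysfs, which is where the "Found net devices under ..." messages come from. The same lookup reproduced by hand, with lspci standing in for the script's own PCI cache and 8086:159b being the E810 ID matched on this machine:

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # each PCI function exposes its netdev name under /sys/bus/pci/devices/<bdf>/net/
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
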
11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:14.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:14.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:14.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:14.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
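The commands above build the point-to-point topology the TCP tests rely on: the first E810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction (the reverse one follows just below) proves the link. Collected from the trace, with interface names and addresses specific to this machine:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
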
00:08:14.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:14.227 00:08:14.227 --- 10.0.0.2 ping statistics --- 00:08:14.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.227 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:14.227 00:08:14.227 --- 10.0.0.1 ping statistics --- 00:08:14.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.227 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.227 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=845434 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 845434 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 845434 ']' 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:14.228 11:45:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 [2024-07-12 11:45:03.731954] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
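nvmfappstart then launches the target inside that namespace with core mask 0xE (three cores, matching the reactors reported below) and every tracepoint group enabled, and waitforlisten blocks until the new process (pid 845434 here) answers on its RPC socket. A simplified hand-run equivalent of that wait; the real helper also verifies the pid stays alive:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default /var/tmp/spdk.sock until the RPC server responds
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
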
00:08:14.487 [2024-07-12 11:45:03.732029] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.487 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.487 [2024-07-12 11:45:03.801197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.487 [2024-07-12 11:45:03.920844] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.487 [2024-07-12 11:45:03.920908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.487 [2024-07-12 11:45:03.920925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.487 [2024-07-12 11:45:03.920938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.487 [2024-07-12 11:45:03.920949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.487 [2024-07-12 11:45:03.921055] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.487 [2024-07-12 11:45:03.921140] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.487 [2024-07-12 11:45:03.921142] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.423 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:15.423 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 [2024-07-12 11:45:04.699380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 Malloc0 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 Delay0 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.424 11:45:04 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 [2024-07-12 11:45:04.768585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.424 11:45:04 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.424 [2024-07-12 11:45:04.832804] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:18.000 Initializing NVMe Controllers 00:08:18.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:18.000 controller IO queue size 128 less than required 00:08:18.000 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:18.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:18.000 Initialization complete. Launching workers. 
00:08:18.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34048 00:08:18.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34109, failed to submit 62 00:08:18.000 success 34052, unsuccess 57, failed 0 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.000 rmmod nvme_tcp 00:08:18.000 rmmod nvme_fabrics 00:08:18.000 rmmod nvme_keyring 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 845434 ']' 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 845434 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 845434 ']' 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 845434 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 845434 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 845434' 00:08:18.000 killing process with pid 845434 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 845434 00:08:18.000 11:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 845434 00:08:18.000 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.000 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.000 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.000 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.001 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.001 11:45:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.001 11:45:07 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.001 11:45:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.898 11:45:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.898 00:08:19.898 real 0m7.780s 00:08:19.898 user 0m12.288s 00:08:19.898 sys 0m2.463s 00:08:19.898 11:45:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.898 11:45:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.898 ************************************ 00:08:19.898 END TEST nvmf_abort 00:08:19.898 ************************************ 00:08:19.898 11:45:09 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:19.898 11:45:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:19.898 11:45:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.898 11:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.898 ************************************ 00:08:19.898 START TEST nvmf_ns_hotplug_stress 00:08:19.898 ************************************ 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:19.898 * Looking for test storage... 00:08:19.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.898 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.157 11:45:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.157 11:45:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.157 11:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.060 11:45:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.060 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:22.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:08:22.061 00:08:22.061 --- 10.0.0.2 ping statistics --- 00:08:22.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.061 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:08:22.061 00:08:22.061 --- 10.0.0.1 ping statistics --- 00:08:22.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.061 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=848292 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 848292 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 848292 ']' 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:22.061 11:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.319 [2024-07-12 11:45:11.577448] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:08:22.319 [2024-07-12 11:45:11.577522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.319 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.319 [2024-07-12 11:45:11.646495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.319 [2024-07-12 11:45:11.765251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:22.319 [2024-07-12 11:45:11.765302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.319 [2024-07-12 11:45:11.765332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.319 [2024-07-12 11:45:11.765344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.319 [2024-07-12 11:45:11.765354] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.319 [2024-07-12 11:45:11.765455] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.319 [2024-07-12 11:45:11.765579] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.319 [2024-07-12 11:45:11.765582] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:23.253 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:23.511 [2024-07-12 11:45:12.834481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.511 11:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:23.769 11:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.026 [2024-07-12 11:45:13.325546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.026 11:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.282 11:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:24.541 Malloc0 00:08:24.541 11:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:24.799 Delay0 00:08:24.799 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.057 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:25.316 NULL1 00:08:25.316 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:25.575 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=848720 00:08:25.575 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:25.575 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:25.575 11:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.576 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.511 Read completed with error (sct=0, sc=11) 00:08:26.511 11:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.769 11:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:26.769 11:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:27.027 true 00:08:27.027 11:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:27.027 11:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.963 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.221 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:28.221 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:28.480 true 00:08:28.480 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:28.480 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.480 11:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.738 11:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:28.738 11:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:28.996 true 00:08:28.996 11:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:28.996 11:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.935 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.192 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:30.192 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:30.448 true 00:08:30.448 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:30.448 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.706 11:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.963 11:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:30.963 11:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:31.220 true 00:08:31.220 11:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:31.220 11:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.156 11:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.414 11:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:32.414 11:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:32.673 true 00:08:32.673 11:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:32.673 11:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.931 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.931 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:32.931 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:33.188 true 00:08:33.188 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:33.188 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.446 11:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.702 11:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:33.702 11:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:33.959 true 00:08:33.959 11:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:33.959 11:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.337 11:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.337 11:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:35.337 11:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:35.596 true 00:08:35.596 11:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:35.596 11:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.561 11:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.561 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:36.561 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1010 00:08:36.820 true 00:08:37.080 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:37.081 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.339 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.339 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:37.339 11:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:37.596 true 00:08:37.596 11:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:37.596 11:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.531 11:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.788 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:38.788 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:39.046 true 00:08:39.046 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:39.046 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.303 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.560 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:39.560 11:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:39.817 true 00:08:39.817 11:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:39.818 11:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.189 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.189 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:41.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.189 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:41.189 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:41.446 true 00:08:41.446 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:41.446 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.704 11:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.961 11:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:41.961 11:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:41.961 true 00:08:42.220 11:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:42.220 11:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.153 11:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.411 11:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:43.411 11:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:43.669 true 00:08:43.669 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:43.669 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.926 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.183 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:44.183 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:44.440 true 00:08:44.440 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:44.440 11:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:45.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.372 11:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.629 11:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:45.629 11:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:45.886 true 00:08:45.886 11:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:45.886 11:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.143 11:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.399 11:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:46.399 11:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:46.656 true 00:08:46.656 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:46.656 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.913 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.170 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:47.170 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:47.427 true 00:08:47.427 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:47.427 11:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.360 11:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.617 11:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:48.617 11:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:48.874 true 00:08:48.874 11:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:48.874 11:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.807 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.064 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:50.064 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:50.322 true 00:08:50.322 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:50.322 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.580 11:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.837 11:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:50.837 11:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:51.106 true 00:08:51.106 11:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:51.106 11:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.040 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.040 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:52.040 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:52.298 true 00:08:52.298 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:52.298 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.555 11:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.812 11:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:52.813 11:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1025 00:08:53.070 true 00:08:53.070 11:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:53.070 11:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.038 11:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.296 11:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:54.296 11:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:54.296 true 00:08:54.296 11:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:54.296 11:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.553 11:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.810 11:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:54.810 11:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:55.067 true 00:08:55.067 11:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:55.068 11:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.002 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.002 Initializing NVMe Controllers 00:08:56.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.002 Controller IO queue size 128, less than required. 00:08:56.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.002 Controller IO queue size 128, less than required. 00:08:56.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:56.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:56.002 Initialization complete. Launching workers. 
00:08:56.002 ======================================================== 00:08:56.002 Latency(us) 00:08:56.002 Device Information : IOPS MiB/s Average min max 00:08:56.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1273.07 0.62 57741.83 3396.75 1033375.78 00:08:56.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11706.57 5.72 10933.81 1806.90 447503.75 00:08:56.002 ======================================================== 00:08:56.002 Total : 12979.63 6.34 15524.83 1806.90 1033375.78 00:08:56.002 00:08:56.260 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:56.260 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:56.517 true 00:08:56.517 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848720 00:08:56.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (848720) - No such process 00:08:56.517 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 848720 00:08:56.517 11:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.775 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.033 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:57.033 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:57.033 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:57.033 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:57.033 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:57.291 null0 00:08:57.291 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:57.291 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:57.291 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:57.549 null1 00:08:57.549 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:57.549 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:57.549 11:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:57.806 null2 00:08:57.806 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:57.806 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:57.806 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:57.806 null3 
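The xtrace records above come from the main hot-plug loop of ns_hotplug_stress.sh (the @44-@50 lines in the trace): while the background I/O generator (PID 848720) is still alive, namespace 1 is removed from cnode1, re-added on the Delay0 bdev, and the NULL1 bdev is grown by one unit and resized. A minimal Bash sketch of that loop, reconstructed from the trace rather than copied from the script; rpc, NQN, perf_pid and the starting size are illustrative assumptions:

# Reconstructed from the xtrace above (ns_hotplug_stress.sh @44-@50); not the verbatim script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # RPC client seen in the trace
NQN=nqn.2016-06.io.spdk:cnode1                                         # target subsystem seen in the trace
perf_pid=848720                                                        # background I/O generator PID from the trace
null_size=1000                                                         # assumed starting size

while kill -0 "$perf_pid"; do                        # @44: loop while the I/O generator is alive
    "$rpc" nvmf_subsystem_remove_ns "$NQN" 1         # @45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$NQN" Delay0       # @46: hot-add it back on the Delay0 bdev
    ((null_size++))                                  # @49: bump the size for the next resize
    "$rpc" bdev_null_resize NULL1 "$null_size"       # @50: resize NULL1 while I/O is in flight
done

The bare "true" lines in the log are most likely the JSON-RPC replies printed by bdev_null_resize; the loop ends once kill -0 reports "No such process" for 848720, after which the script waits for the generator and removes both namespaces (@53-@55) before moving on to the multi-worker phase.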
00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:58.064 null4 00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.064 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:58.321 null5 00:08:58.321 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.321 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.321 11:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:58.579 null6 00:08:58.579 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.579 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.579 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:58.838 null7 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
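Each of the eight background workers traced in these records runs the add_remove helper (the @14-@18 lines, invoked as "add_remove 1 null0", "add_remove 2 null1", and so on): ten rounds of attaching its null bdev as a namespace and detaching it again. A sketch reconstructed from the trace, reusing the assumed rpc and NQN variables from the previous sketch:

# add_remove, reconstructed from the xtrace (@14-@18); argument handling is assumed.
add_remove() {
    local nsid=$1 bdev=$2                                        # @14: e.g. nsid=1, bdev=null0
    for ((i = 0; i < 10; i++)); do                               # @16: ten add/remove rounds per worker
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # @17: attach the null bdev as namespace $nsid
        "$rpc" nvmf_subsystem_remove_ns "$NQN" "$nsid"           # @18: detach it again straight away
    done
}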
00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.838 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
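The surrounding @58-@66 records show how those workers are started: one add_remove per null bdev, all in the background, with the script collecting the PIDs and waiting on them (the "wait 852777 852778 ..." record below). A sketch of that launcher, again reconstructed from the trace:

# Worker launch, reconstructed from the xtrace (@58-@66).
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # @63: namespace IDs 1-8 mapped onto null0-null7
    pids+=($!)                         # @64: remember each worker's PID
done
wait "${pids[@]}"                      # @66: block until every hot-plug worker has finished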
00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 852777 852778 852780 852782 852784 852786 852788 852790 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.839 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.097 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.356 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.615 11:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.873 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.130 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.130 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.131 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.388 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.389 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.646 11:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.904 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.905 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.905 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.163 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.163 
11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.421 11:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.680 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.938 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.196 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.454 
11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.454 11:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.712 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.970 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.228 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:03.486 
11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:03.486 11:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.744 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:04.002 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:04.260 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
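[editor's note] The interleaved xtrace entries above come from the hotplug loop at ns_hotplug_stress.sh lines 16-18: on each pass the test attaches the null bdevs null0..null7 as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and then detaches them again while I/O keeps running. A minimal sequential sketch of that cycle, assuming the rpc.py path, NQN and bdev names shown in the log (the real script appears to issue the RPCs concurrently, which is why the ordering in the log is shuffled):

  # Hedged sketch of the add/remove cycle driven by ns_hotplug_stress.sh@16-18
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for ((i = 0; i < 10; i++)); do
      for n in {1..8}; do
          # attach null bdev "null$((n-1))" as namespace ID $n
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in {1..8}; do
          # detach the namespace again while traffic continues
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done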
00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.261 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.261 rmmod nvme_tcp 00:09:04.520 rmmod nvme_fabrics 00:09:04.520 rmmod nvme_keyring 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 848292 ']' 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 848292 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 848292 ']' 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 848292 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 848292 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 848292' 00:09:04.520 killing process with pid 848292 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 848292 00:09:04.520 11:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 848292 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.778 11:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.683 11:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.683 00:09:06.683 real 0m46.833s 00:09:06.683 user 3m32.148s 00:09:06.683 sys 0m16.521s 00:09:06.683 11:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:06.683 11:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.683 ************************************ 00:09:06.683 END TEST nvmf_ns_hotplug_stress 00:09:06.683 ************************************ 00:09:06.941 11:45:56 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:06.941 11:45:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:06.941 11:45:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:06.941 11:45:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 ************************************ 00:09:06.941 START TEST nvmf_connect_stress 00:09:06.941 ************************************ 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:06.941 * Looking for test storage... 
00:09:06.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.941 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.942 11:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.895 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.896 11:45:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:08.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:08.896 00:09:08.896 --- 10.0.0.2 ping statistics --- 00:09:08.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.896 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:08.896 00:09:08.896 --- 10.0.0.1 ping statistics --- 00:09:08.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.896 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:08.896 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=855532 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 855532 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 855532 ']' 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:09.153 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.153 [2024-07-12 11:45:58.435692] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
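[editor's note] At this point connect_stress.sh@12-13 has finished the phy-mode network bring-up (nvmf_tcp_init) and is starting nvmf_tgt inside the target namespace; the "Starting SPDK ..." line above is the first output from that process (pid 855532 in this run). A hedged recap of the bring-up sequence, with interface names, addresses and paths taken from the log and error handling omitted:

  # Hedged recap of nvmf_tcp_init + nvmfappstart as seen in this run
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
  modprobe nvme-tcp
  # nvmf_tgt is then launched inside the namespace and left running in the background:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &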
00:09:09.153 [2024-07-12 11:45:58.435779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.153 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.153 [2024-07-12 11:45:58.502457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.153 [2024-07-12 11:45:58.613680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.153 [2024-07-12 11:45:58.613752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.153 [2024-07-12 11:45:58.613780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.153 [2024-07-12 11:45:58.613791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.153 [2024-07-12 11:45:58.613801] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.153 [2024-07-12 11:45:58.613892] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.153 [2024-07-12 11:45:58.613960] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.153 [2024-07-12 11:45:58.613964] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.410 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 [2024-07-12 11:45:58.759289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 [2024-07-12 11:45:58.788031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 NULL1 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=855629 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.411 11:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.975 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.975 11:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:09.975 11:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.975 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.975 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.233 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.233 11:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:10.233 11:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.233 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.233 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.491 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.491 11:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:10.491 11:45:59 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.491 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.491 11:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.750 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:10.750 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.750 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.750 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.008 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.008 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:11.008 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.008 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.008 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.604 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.604 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:11.604 11:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.604 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.604 11:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.604 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.604 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:11.604 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.861 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.861 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.118 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:12.118 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:12.118 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.118 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:12.118 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.376 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:12.376 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:12.376 11:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.376 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:12.376 11:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.633 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:12.633 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:12.633 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:09:12.633 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:12.633 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.891 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:12.891 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:12.891 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.891 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:12.891 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.455 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.455 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:13.455 11:46:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.455 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.455 11:46:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.711 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:13.711 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.711 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.711 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.992 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.992 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:13.992 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.992 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.992 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:14.249 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:14.249 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.249 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:14.249 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.506 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:14.506 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:14.506 11:46:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.506 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:14.506 11:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.071 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:15.071 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.071 11:46:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.071 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.328 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.328 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:15.328 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.328 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.328 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.585 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.585 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:15.585 11:46:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.585 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.585 11:46:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.842 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.842 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:15.842 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.842 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.842 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.404 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.404 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:16.404 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.404 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.404 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.661 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.661 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:16.661 11:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.661 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.661 11:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.919 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.919 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:16.919 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.919 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.919 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.177 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.177 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:17.177 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.177 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.177 
11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.434 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:17.434 11:46:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.434 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.434 11:46:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.999 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.999 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:17.999 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.999 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.999 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.256 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.256 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:18.256 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.256 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.256 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.513 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.513 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:18.513 11:46:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.513 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.513 11:46:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.770 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.770 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:18.770 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.770 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.770 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.027 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.027 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:19.027 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:19.027 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.027 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.591 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.591 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:19.591 11:46:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:19.592 11:46:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.592 11:46:08 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.592 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 855629 00:09:19.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (855629) - No such process 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 855629 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.850 rmmod nvme_tcp 00:09:19.850 rmmod nvme_fabrics 00:09:19.850 rmmod nvme_keyring 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 855532 ']' 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 855532 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 855532 ']' 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 855532 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 855532 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 855532' 00:09:19.850 killing process with pid 855532 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 855532 00:09:19.850 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 855532 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.108 11:46:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.641 11:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.641 00:09:22.641 real 0m15.312s 00:09:22.641 user 0m38.403s 00:09:22.641 sys 0m5.822s 00:09:22.641 11:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.641 11:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.641 ************************************ 00:09:22.641 END TEST nvmf_connect_stress 00:09:22.641 ************************************ 00:09:22.641 11:46:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:22.641 11:46:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:22.641 11:46:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.641 11:46:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.641 ************************************ 00:09:22.641 START TEST nvmf_fused_ordering 00:09:22.641 ************************************ 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:22.641 * Looking for test storage... 
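Before the fused_ordering run starts: the long run of kill -0 855629 / rpc_cmd pairs that just concluded is connect_stress.sh polling whether the background stress tool (PID 855629) is still alive while it keeps driving RPCs at the target; once kill -0 reports "No such process" it reaps the tool and tears the target down with nvmftestfini. A minimal sketch of that loop, assuming hypothetical variable names and assuming rpc_cmd replays the batched calls from rpc.txt (the real script may differ in detail):

  STRESS_PID=$!                        # stress tool launched in the background earlier
  while kill -0 "$STRESS_PID" 2>/dev/null; do
      rpc_cmd < "$testdir/rpc.txt"     # assumption: RPC calls are batched from rpc.txt
  done
  wait "$STRESS_PID"                   # reap the exited stress tool
  rm -f "$testdir/rpc.txt"
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                         # unload nvme-tcp/nvme-fabrics, kill nvmf_tgt, flush test IPs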
00:09:22.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.641 11:46:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.642 11:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.545 11:46:13 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.545 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.546 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:09:24.546 00:09:24.546 --- 10.0.0.2 ping statistics --- 00:09:24.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.546 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:09:24.546 00:09:24.546 --- 10.0.0.1 ping statistics --- 00:09:24.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.546 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=858826 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 858826 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 858826 ']' 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:24.546 11:46:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 [2024-07-12 11:46:13.771852] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
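At this point nvmf_tcp_init has moved one of the two ice ports (cvl_0_0) into a private network namespace so that the target listening on 10.0.0.2 and the initiator on 10.0.0.1 exchange traffic over the physical link, and nvmfappstart launches nvmf_tgt inside that namespace. Condensed from the trace above into a sketch (paths shortened; waitforlisten is the harness helper that blocks until the SPDK RPC socket is up):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  waitforlisten "$!"                                    # wait for /var/tmp/spdk.sock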
00:09:24.546 [2024-07-12 11:46:13.771959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.546 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.546 [2024-07-12 11:46:13.836928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.546 [2024-07-12 11:46:13.946122] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.546 [2024-07-12 11:46:13.946180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.546 [2024-07-12 11:46:13.946208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.546 [2024-07-12 11:46:13.946220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.546 [2024-07-12 11:46:13.946230] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.546 [2024-07-12 11:46:13.946257] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.806 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.806 [2024-07-12 11:46:14.096339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.807 [2024-07-12 11:46:14.112522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.807 NULL1 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.807 11:46:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:24.807 [2024-07-12 11:46:14.157237] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:09:24.807 [2024-07-12 11:46:14.157279] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858852 ] 00:09:24.807 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.066 Attached to nqn.2016-06.io.spdk:cnode1 00:09:25.066 Namespace ID: 1 size: 1GB 00:09:25.066 fused_ordering(0) 00:09:25.066 fused_ordering(1) 00:09:25.066 fused_ordering(2) 00:09:25.066 fused_ordering(3) 00:09:25.066 fused_ordering(4) 00:09:25.066 fused_ordering(5) 00:09:25.066 fused_ordering(6) 00:09:25.066 fused_ordering(7) 00:09:25.066 fused_ordering(8) 00:09:25.066 fused_ordering(9) 00:09:25.066 fused_ordering(10) 00:09:25.066 fused_ordering(11) 00:09:25.066 fused_ordering(12) 00:09:25.066 fused_ordering(13) 00:09:25.066 fused_ordering(14) 00:09:25.066 fused_ordering(15) 00:09:25.066 fused_ordering(16) 00:09:25.066 fused_ordering(17) 00:09:25.066 fused_ordering(18) 00:09:25.066 fused_ordering(19) 00:09:25.066 fused_ordering(20) 00:09:25.066 fused_ordering(21) 00:09:25.066 fused_ordering(22) 00:09:25.066 fused_ordering(23) 00:09:25.066 fused_ordering(24) 00:09:25.066 fused_ordering(25) 00:09:25.066 fused_ordering(26) 00:09:25.066 fused_ordering(27) 00:09:25.066 fused_ordering(28) 00:09:25.066 fused_ordering(29) 00:09:25.066 fused_ordering(30) 00:09:25.066 fused_ordering(31) 00:09:25.066 fused_ordering(32) 00:09:25.066 fused_ordering(33) 00:09:25.066 fused_ordering(34) 00:09:25.066 fused_ordering(35) 00:09:25.066 fused_ordering(36) 00:09:25.066 fused_ordering(37) 00:09:25.066 fused_ordering(38) 00:09:25.066 fused_ordering(39) 00:09:25.066 fused_ordering(40) 00:09:25.066 fused_ordering(41) 00:09:25.066 fused_ordering(42) 00:09:25.066 fused_ordering(43) 00:09:25.066 fused_ordering(44) 00:09:25.066 fused_ordering(45) 
00:09:25.066 fused_ordering(46) 00:09:25.066 fused_ordering(47) 00:09:25.066 fused_ordering(48) 00:09:25.066 fused_ordering(49) 00:09:25.066 fused_ordering(50) 00:09:25.066 fused_ordering(51) 00:09:25.066 fused_ordering(52) 00:09:25.066 fused_ordering(53) 00:09:25.066 fused_ordering(54) 00:09:25.067 fused_ordering(55) 00:09:25.067 fused_ordering(56) 00:09:25.067 fused_ordering(57) 00:09:25.067 fused_ordering(58) 00:09:25.067 fused_ordering(59) 00:09:25.067 fused_ordering(60) 00:09:25.067 fused_ordering(61) 00:09:25.067 fused_ordering(62) 00:09:25.067 fused_ordering(63) 00:09:25.067 fused_ordering(64) 00:09:25.067 fused_ordering(65) 00:09:25.067 fused_ordering(66) 00:09:25.067 fused_ordering(67) 00:09:25.067 fused_ordering(68) 00:09:25.067 fused_ordering(69) 00:09:25.067 fused_ordering(70) 00:09:25.067 fused_ordering(71) 00:09:25.067 fused_ordering(72) 00:09:25.067 fused_ordering(73) 00:09:25.067 fused_ordering(74) 00:09:25.067 fused_ordering(75) 00:09:25.067 fused_ordering(76) 00:09:25.067 fused_ordering(77) 00:09:25.067 fused_ordering(78) 00:09:25.067 fused_ordering(79) 00:09:25.067 fused_ordering(80) 00:09:25.067 fused_ordering(81) 00:09:25.067 fused_ordering(82) 00:09:25.067 fused_ordering(83) 00:09:25.067 fused_ordering(84) 00:09:25.067 fused_ordering(85) 00:09:25.067 fused_ordering(86) 00:09:25.067 fused_ordering(87) 00:09:25.067 fused_ordering(88) 00:09:25.067 fused_ordering(89) 00:09:25.067 fused_ordering(90) 00:09:25.067 fused_ordering(91) 00:09:25.067 fused_ordering(92) 00:09:25.067 fused_ordering(93) 00:09:25.067 fused_ordering(94) 00:09:25.067 fused_ordering(95) 00:09:25.067 fused_ordering(96) 00:09:25.067 fused_ordering(97) 00:09:25.067 fused_ordering(98) 00:09:25.067 fused_ordering(99) 00:09:25.067 fused_ordering(100) 00:09:25.067 fused_ordering(101) 00:09:25.067 fused_ordering(102) 00:09:25.067 fused_ordering(103) 00:09:25.067 fused_ordering(104) 00:09:25.067 fused_ordering(105) 00:09:25.067 fused_ordering(106) 00:09:25.067 fused_ordering(107) 00:09:25.067 fused_ordering(108) 00:09:25.067 fused_ordering(109) 00:09:25.067 fused_ordering(110) 00:09:25.067 fused_ordering(111) 00:09:25.067 fused_ordering(112) 00:09:25.067 fused_ordering(113) 00:09:25.067 fused_ordering(114) 00:09:25.067 fused_ordering(115) 00:09:25.067 fused_ordering(116) 00:09:25.067 fused_ordering(117) 00:09:25.067 fused_ordering(118) 00:09:25.067 fused_ordering(119) 00:09:25.067 fused_ordering(120) 00:09:25.067 fused_ordering(121) 00:09:25.067 fused_ordering(122) 00:09:25.067 fused_ordering(123) 00:09:25.067 fused_ordering(124) 00:09:25.067 fused_ordering(125) 00:09:25.067 fused_ordering(126) 00:09:25.067 fused_ordering(127) 00:09:25.067 fused_ordering(128) 00:09:25.067 fused_ordering(129) 00:09:25.067 fused_ordering(130) 00:09:25.067 fused_ordering(131) 00:09:25.067 fused_ordering(132) 00:09:25.067 fused_ordering(133) 00:09:25.067 fused_ordering(134) 00:09:25.067 fused_ordering(135) 00:09:25.067 fused_ordering(136) 00:09:25.067 fused_ordering(137) 00:09:25.067 fused_ordering(138) 00:09:25.067 fused_ordering(139) 00:09:25.067 fused_ordering(140) 00:09:25.067 fused_ordering(141) 00:09:25.067 fused_ordering(142) 00:09:25.067 fused_ordering(143) 00:09:25.067 fused_ordering(144) 00:09:25.067 fused_ordering(145) 00:09:25.067 fused_ordering(146) 00:09:25.067 fused_ordering(147) 00:09:25.067 fused_ordering(148) 00:09:25.067 fused_ordering(149) 00:09:25.067 fused_ordering(150) 00:09:25.067 fused_ordering(151) 00:09:25.067 fused_ordering(152) 00:09:25.067 fused_ordering(153) 00:09:25.067 fused_ordering(154) 
00:09:25.067 fused_ordering(155) 00:09:25.067 fused_ordering(156) 00:09:25.067 fused_ordering(157) 00:09:25.067 fused_ordering(158) 00:09:25.067 fused_ordering(159) 00:09:25.067 fused_ordering(160) 00:09:25.067 fused_ordering(161) 00:09:25.067 fused_ordering(162) 00:09:25.067 fused_ordering(163) 00:09:25.067 fused_ordering(164) 00:09:25.067 fused_ordering(165) 00:09:25.067 fused_ordering(166) 00:09:25.067 fused_ordering(167) 00:09:25.067 fused_ordering(168) 00:09:25.067 fused_ordering(169) 00:09:25.067 fused_ordering(170) 00:09:25.067 fused_ordering(171) 00:09:25.067 fused_ordering(172) 00:09:25.067 fused_ordering(173) 00:09:25.067 fused_ordering(174) 00:09:25.067 fused_ordering(175) 00:09:25.067 fused_ordering(176) 00:09:25.067 fused_ordering(177) 00:09:25.067 fused_ordering(178) 00:09:25.067 fused_ordering(179) 00:09:25.067 fused_ordering(180) 00:09:25.067 fused_ordering(181) 00:09:25.067 fused_ordering(182) 00:09:25.067 fused_ordering(183) 00:09:25.067 fused_ordering(184) 00:09:25.067 fused_ordering(185) 00:09:25.067 fused_ordering(186) 00:09:25.067 fused_ordering(187) 00:09:25.067 fused_ordering(188) 00:09:25.067 fused_ordering(189) 00:09:25.067 fused_ordering(190) 00:09:25.067 fused_ordering(191) 00:09:25.067 fused_ordering(192) 00:09:25.067 fused_ordering(193) 00:09:25.067 fused_ordering(194) 00:09:25.067 fused_ordering(195) 00:09:25.067 fused_ordering(196) 00:09:25.067 fused_ordering(197) 00:09:25.067 fused_ordering(198) 00:09:25.067 fused_ordering(199) 00:09:25.067 fused_ordering(200) 00:09:25.067 fused_ordering(201) 00:09:25.067 fused_ordering(202) 00:09:25.067 fused_ordering(203) 00:09:25.067 fused_ordering(204) 00:09:25.067 fused_ordering(205) 00:09:25.641 fused_ordering(206) 00:09:25.641 fused_ordering(207) 00:09:25.641 fused_ordering(208) 00:09:25.641 fused_ordering(209) 00:09:25.641 fused_ordering(210) 00:09:25.641 fused_ordering(211) 00:09:25.641 fused_ordering(212) 00:09:25.641 fused_ordering(213) 00:09:25.641 fused_ordering(214) 00:09:25.641 fused_ordering(215) 00:09:25.641 fused_ordering(216) 00:09:25.641 fused_ordering(217) 00:09:25.641 fused_ordering(218) 00:09:25.641 fused_ordering(219) 00:09:25.641 fused_ordering(220) 00:09:25.641 fused_ordering(221) 00:09:25.641 fused_ordering(222) 00:09:25.641 fused_ordering(223) 00:09:25.641 fused_ordering(224) 00:09:25.641 fused_ordering(225) 00:09:25.641 fused_ordering(226) 00:09:25.641 fused_ordering(227) 00:09:25.641 fused_ordering(228) 00:09:25.641 fused_ordering(229) 00:09:25.641 fused_ordering(230) 00:09:25.641 fused_ordering(231) 00:09:25.641 fused_ordering(232) 00:09:25.641 fused_ordering(233) 00:09:25.641 fused_ordering(234) 00:09:25.641 fused_ordering(235) 00:09:25.641 fused_ordering(236) 00:09:25.641 fused_ordering(237) 00:09:25.641 fused_ordering(238) 00:09:25.641 fused_ordering(239) 00:09:25.641 fused_ordering(240) 00:09:25.641 fused_ordering(241) 00:09:25.641 fused_ordering(242) 00:09:25.641 fused_ordering(243) 00:09:25.641 fused_ordering(244) 00:09:25.641 fused_ordering(245) 00:09:25.641 fused_ordering(246) 00:09:25.641 fused_ordering(247) 00:09:25.641 fused_ordering(248) 00:09:25.641 fused_ordering(249) 00:09:25.641 fused_ordering(250) 00:09:25.641 fused_ordering(251) 00:09:25.641 fused_ordering(252) 00:09:25.641 fused_ordering(253) 00:09:25.641 fused_ordering(254) 00:09:25.641 fused_ordering(255) 00:09:25.641 fused_ordering(256) 00:09:25.641 fused_ordering(257) 00:09:25.641 fused_ordering(258) 00:09:25.641 fused_ordering(259) 00:09:25.641 fused_ordering(260) 00:09:25.641 fused_ordering(261) 00:09:25.641 
fused_ordering(262) 00:09:25.641 fused_ordering(263) 00:09:25.641 fused_ordering(264) 00:09:25.641 fused_ordering(265) 00:09:25.641 fused_ordering(266) 00:09:25.641 fused_ordering(267) 00:09:25.641 fused_ordering(268) 00:09:25.641 fused_ordering(269) 00:09:25.641 fused_ordering(270) 00:09:25.641 fused_ordering(271) 00:09:25.641 fused_ordering(272) 00:09:25.641 fused_ordering(273) 00:09:25.641 fused_ordering(274) 00:09:25.641 fused_ordering(275) 00:09:25.641 fused_ordering(276) 00:09:25.641 fused_ordering(277) 00:09:25.641 fused_ordering(278) 00:09:25.641 fused_ordering(279) 00:09:25.641 fused_ordering(280) 00:09:25.641 fused_ordering(281) 00:09:25.641 fused_ordering(282) 00:09:25.641 fused_ordering(283) 00:09:25.641 fused_ordering(284) 00:09:25.641 fused_ordering(285) 00:09:25.641 fused_ordering(286) 00:09:25.641 fused_ordering(287) 00:09:25.641 fused_ordering(288) 00:09:25.641 fused_ordering(289) 00:09:25.641 fused_ordering(290) 00:09:25.641 fused_ordering(291) 00:09:25.641 fused_ordering(292) 00:09:25.641 fused_ordering(293) 00:09:25.641 fused_ordering(294) 00:09:25.641 fused_ordering(295) 00:09:25.641 fused_ordering(296) 00:09:25.641 fused_ordering(297) 00:09:25.641 fused_ordering(298) 00:09:25.641 fused_ordering(299) 00:09:25.641 fused_ordering(300) 00:09:25.641 fused_ordering(301) 00:09:25.641 fused_ordering(302) 00:09:25.641 fused_ordering(303) 00:09:25.641 fused_ordering(304) 00:09:25.641 fused_ordering(305) 00:09:25.641 fused_ordering(306) 00:09:25.641 fused_ordering(307) 00:09:25.641 fused_ordering(308) 00:09:25.641 fused_ordering(309) 00:09:25.641 fused_ordering(310) 00:09:25.641 fused_ordering(311) 00:09:25.641 fused_ordering(312) 00:09:25.641 fused_ordering(313) 00:09:25.641 fused_ordering(314) 00:09:25.641 fused_ordering(315) 00:09:25.641 fused_ordering(316) 00:09:25.641 fused_ordering(317) 00:09:25.641 fused_ordering(318) 00:09:25.641 fused_ordering(319) 00:09:25.641 fused_ordering(320) 00:09:25.641 fused_ordering(321) 00:09:25.641 fused_ordering(322) 00:09:25.641 fused_ordering(323) 00:09:25.641 fused_ordering(324) 00:09:25.641 fused_ordering(325) 00:09:25.641 fused_ordering(326) 00:09:25.641 fused_ordering(327) 00:09:25.641 fused_ordering(328) 00:09:25.641 fused_ordering(329) 00:09:25.641 fused_ordering(330) 00:09:25.641 fused_ordering(331) 00:09:25.641 fused_ordering(332) 00:09:25.641 fused_ordering(333) 00:09:25.641 fused_ordering(334) 00:09:25.641 fused_ordering(335) 00:09:25.641 fused_ordering(336) 00:09:25.641 fused_ordering(337) 00:09:25.641 fused_ordering(338) 00:09:25.641 fused_ordering(339) 00:09:25.641 fused_ordering(340) 00:09:25.641 fused_ordering(341) 00:09:25.641 fused_ordering(342) 00:09:25.641 fused_ordering(343) 00:09:25.641 fused_ordering(344) 00:09:25.641 fused_ordering(345) 00:09:25.641 fused_ordering(346) 00:09:25.641 fused_ordering(347) 00:09:25.641 fused_ordering(348) 00:09:25.641 fused_ordering(349) 00:09:25.641 fused_ordering(350) 00:09:25.641 fused_ordering(351) 00:09:25.641 fused_ordering(352) 00:09:25.641 fused_ordering(353) 00:09:25.641 fused_ordering(354) 00:09:25.641 fused_ordering(355) 00:09:25.641 fused_ordering(356) 00:09:25.641 fused_ordering(357) 00:09:25.641 fused_ordering(358) 00:09:25.641 fused_ordering(359) 00:09:25.641 fused_ordering(360) 00:09:25.641 fused_ordering(361) 00:09:25.641 fused_ordering(362) 00:09:25.641 fused_ordering(363) 00:09:25.641 fused_ordering(364) 00:09:25.641 fused_ordering(365) 00:09:25.641 fused_ordering(366) 00:09:25.641 fused_ordering(367) 00:09:25.641 fused_ordering(368) 00:09:25.641 fused_ordering(369) 
00:09:25.641 fused_ordering(370) 00:09:25.641 fused_ordering(371) 00:09:25.641 fused_ordering(372) 00:09:25.641 fused_ordering(373) 00:09:25.641 fused_ordering(374) 00:09:25.641 fused_ordering(375) 00:09:25.641 fused_ordering(376) 00:09:25.641 fused_ordering(377) 00:09:25.641 fused_ordering(378) 00:09:25.641 fused_ordering(379) 00:09:25.641 fused_ordering(380) 00:09:25.641 fused_ordering(381) 00:09:25.641 fused_ordering(382) 00:09:25.641 fused_ordering(383) 00:09:25.641 fused_ordering(384) 00:09:25.641 fused_ordering(385) 00:09:25.641 fused_ordering(386) 00:09:25.641 fused_ordering(387) 00:09:25.641 fused_ordering(388) 00:09:25.641 fused_ordering(389) 00:09:25.641 fused_ordering(390) 00:09:25.641 fused_ordering(391) 00:09:25.641 fused_ordering(392) 00:09:25.641 fused_ordering(393) 00:09:25.641 fused_ordering(394) 00:09:25.641 fused_ordering(395) 00:09:25.641 fused_ordering(396) 00:09:25.641 fused_ordering(397) 00:09:25.641 fused_ordering(398) 00:09:25.641 fused_ordering(399) 00:09:25.641 fused_ordering(400) 00:09:25.641 fused_ordering(401) 00:09:25.641 fused_ordering(402) 00:09:25.641 fused_ordering(403) 00:09:25.641 fused_ordering(404) 00:09:25.641 fused_ordering(405) 00:09:25.641 fused_ordering(406) 00:09:25.641 fused_ordering(407) 00:09:25.641 fused_ordering(408) 00:09:25.641 fused_ordering(409) 00:09:25.641 fused_ordering(410) 00:09:25.902 fused_ordering(411) 00:09:25.902 fused_ordering(412) 00:09:25.902 fused_ordering(413) 00:09:25.902 fused_ordering(414) 00:09:25.902 fused_ordering(415) 00:09:25.902 fused_ordering(416) 00:09:25.902 fused_ordering(417) 00:09:25.902 fused_ordering(418) 00:09:25.902 fused_ordering(419) 00:09:25.902 fused_ordering(420) 00:09:25.902 fused_ordering(421) 00:09:25.902 fused_ordering(422) 00:09:25.902 fused_ordering(423) 00:09:25.902 fused_ordering(424) 00:09:25.902 fused_ordering(425) 00:09:25.902 fused_ordering(426) 00:09:25.902 fused_ordering(427) 00:09:25.902 fused_ordering(428) 00:09:25.902 fused_ordering(429) 00:09:25.902 fused_ordering(430) 00:09:25.902 fused_ordering(431) 00:09:25.902 fused_ordering(432) 00:09:25.902 fused_ordering(433) 00:09:25.902 fused_ordering(434) 00:09:25.902 fused_ordering(435) 00:09:25.902 fused_ordering(436) 00:09:25.902 fused_ordering(437) 00:09:25.902 fused_ordering(438) 00:09:25.902 fused_ordering(439) 00:09:25.902 fused_ordering(440) 00:09:25.902 fused_ordering(441) 00:09:25.902 fused_ordering(442) 00:09:25.902 fused_ordering(443) 00:09:25.902 fused_ordering(444) 00:09:25.902 fused_ordering(445) 00:09:25.902 fused_ordering(446) 00:09:25.902 fused_ordering(447) 00:09:25.902 fused_ordering(448) 00:09:25.902 fused_ordering(449) 00:09:25.902 fused_ordering(450) 00:09:25.902 fused_ordering(451) 00:09:25.902 fused_ordering(452) 00:09:25.902 fused_ordering(453) 00:09:25.902 fused_ordering(454) 00:09:25.902 fused_ordering(455) 00:09:25.902 fused_ordering(456) 00:09:25.902 fused_ordering(457) 00:09:25.902 fused_ordering(458) 00:09:25.902 fused_ordering(459) 00:09:25.902 fused_ordering(460) 00:09:25.902 fused_ordering(461) 00:09:25.902 fused_ordering(462) 00:09:25.902 fused_ordering(463) 00:09:25.902 fused_ordering(464) 00:09:25.902 fused_ordering(465) 00:09:25.902 fused_ordering(466) 00:09:25.902 fused_ordering(467) 00:09:25.902 fused_ordering(468) 00:09:25.902 fused_ordering(469) 00:09:25.902 fused_ordering(470) 00:09:25.902 fused_ordering(471) 00:09:25.902 fused_ordering(472) 00:09:25.902 fused_ordering(473) 00:09:25.902 fused_ordering(474) 00:09:25.902 fused_ordering(475) 00:09:25.902 fused_ordering(476) 00:09:25.902 
fused_ordering(477) 00:09:25.902 fused_ordering(478) 00:09:25.902 fused_ordering(479) 00:09:25.902 fused_ordering(480) 00:09:25.902 fused_ordering(481) 00:09:25.902 fused_ordering(482) 00:09:25.902 fused_ordering(483) 00:09:25.902 fused_ordering(484) 00:09:25.902 fused_ordering(485) 00:09:25.902 fused_ordering(486) 00:09:25.902 fused_ordering(487) 00:09:25.902 fused_ordering(488) 00:09:25.902 fused_ordering(489) 00:09:25.902 fused_ordering(490) 00:09:25.902 fused_ordering(491) 00:09:25.902 fused_ordering(492) 00:09:25.902 fused_ordering(493) 00:09:25.902 fused_ordering(494) 00:09:25.902 fused_ordering(495) 00:09:25.902 fused_ordering(496) 00:09:25.902 fused_ordering(497) 00:09:25.902 fused_ordering(498) 00:09:25.902 fused_ordering(499) 00:09:25.902 fused_ordering(500) 00:09:25.902 fused_ordering(501) 00:09:25.902 fused_ordering(502) 00:09:25.902 fused_ordering(503) 00:09:25.902 fused_ordering(504) 00:09:25.902 fused_ordering(505) 00:09:25.902 fused_ordering(506) 00:09:25.902 fused_ordering(507) 00:09:25.902 fused_ordering(508) 00:09:25.902 fused_ordering(509) 00:09:25.902 fused_ordering(510) 00:09:25.902 fused_ordering(511) 00:09:25.902 fused_ordering(512) 00:09:25.902 fused_ordering(513) 00:09:25.902 fused_ordering(514) 00:09:25.902 fused_ordering(515) 00:09:25.902 fused_ordering(516) 00:09:25.902 fused_ordering(517) 00:09:25.902 fused_ordering(518) 00:09:25.902 fused_ordering(519) 00:09:25.902 fused_ordering(520) 00:09:25.902 fused_ordering(521) 00:09:25.902 fused_ordering(522) 00:09:25.902 fused_ordering(523) 00:09:25.902 fused_ordering(524) 00:09:25.902 fused_ordering(525) 00:09:25.902 fused_ordering(526) 00:09:25.902 fused_ordering(527) 00:09:25.902 fused_ordering(528) 00:09:25.902 fused_ordering(529) 00:09:25.902 fused_ordering(530) 00:09:25.902 fused_ordering(531) 00:09:25.902 fused_ordering(532) 00:09:25.902 fused_ordering(533) 00:09:25.902 fused_ordering(534) 00:09:25.902 fused_ordering(535) 00:09:25.902 fused_ordering(536) 00:09:25.902 fused_ordering(537) 00:09:25.902 fused_ordering(538) 00:09:25.902 fused_ordering(539) 00:09:25.902 fused_ordering(540) 00:09:25.902 fused_ordering(541) 00:09:25.902 fused_ordering(542) 00:09:25.902 fused_ordering(543) 00:09:25.902 fused_ordering(544) 00:09:25.902 fused_ordering(545) 00:09:25.902 fused_ordering(546) 00:09:25.902 fused_ordering(547) 00:09:25.902 fused_ordering(548) 00:09:25.902 fused_ordering(549) 00:09:25.902 fused_ordering(550) 00:09:25.902 fused_ordering(551) 00:09:25.902 fused_ordering(552) 00:09:25.902 fused_ordering(553) 00:09:25.902 fused_ordering(554) 00:09:25.902 fused_ordering(555) 00:09:25.902 fused_ordering(556) 00:09:25.902 fused_ordering(557) 00:09:25.902 fused_ordering(558) 00:09:25.902 fused_ordering(559) 00:09:25.902 fused_ordering(560) 00:09:25.902 fused_ordering(561) 00:09:25.902 fused_ordering(562) 00:09:25.902 fused_ordering(563) 00:09:25.902 fused_ordering(564) 00:09:25.902 fused_ordering(565) 00:09:25.902 fused_ordering(566) 00:09:25.902 fused_ordering(567) 00:09:25.902 fused_ordering(568) 00:09:25.902 fused_ordering(569) 00:09:25.902 fused_ordering(570) 00:09:25.902 fused_ordering(571) 00:09:25.902 fused_ordering(572) 00:09:25.902 fused_ordering(573) 00:09:25.902 fused_ordering(574) 00:09:25.902 fused_ordering(575) 00:09:25.902 fused_ordering(576) 00:09:25.902 fused_ordering(577) 00:09:25.902 fused_ordering(578) 00:09:25.902 fused_ordering(579) 00:09:25.902 fused_ordering(580) 00:09:25.902 fused_ordering(581) 00:09:25.902 fused_ordering(582) 00:09:25.902 fused_ordering(583) 00:09:25.902 fused_ordering(584) 
00:09:25.902 fused_ordering(585) ... fused_ordering(1013) [per-entry fused_ordering counter output for 585 through 1013 condensed; timestamps run from 00:09:25.902 to 00:09:27.044] 00:09:27.044
fused_ordering(1014) 00:09:27.044 fused_ordering(1015) 00:09:27.044 fused_ordering(1016) 00:09:27.044 fused_ordering(1017) 00:09:27.044 fused_ordering(1018) 00:09:27.044 fused_ordering(1019) 00:09:27.044 fused_ordering(1020) 00:09:27.044 fused_ordering(1021) 00:09:27.044 fused_ordering(1022) 00:09:27.044 fused_ordering(1023) 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.044 rmmod nvme_tcp 00:09:27.044 rmmod nvme_fabrics 00:09:27.044 rmmod nvme_keyring 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 858826 ']' 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 858826 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 858826 ']' 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 858826 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:27.044 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 858826 00:09:27.304 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:27.304 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:27.304 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 858826' 00:09:27.304 killing process with pid 858826 00:09:27.304 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 858826 00:09:27.304 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 858826 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.566 11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.566 
11:46:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.474 11:46:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:29.474 00:09:29.474 real 0m7.294s 00:09:29.474 user 0m4.939s 00:09:29.474 sys 0m2.994s 00:09:29.474 11:46:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:29.474 11:46:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:29.474 ************************************ 00:09:29.474 END TEST nvmf_fused_ordering 00:09:29.474 ************************************ 00:09:29.474 11:46:18 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:29.474 11:46:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:29.474 11:46:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:29.474 11:46:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.474 ************************************ 00:09:29.474 START TEST nvmf_delete_subsystem 00:09:29.474 ************************************ 00:09:29.474 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:29.733 * Looking for test storage... 00:09:29.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.733 11:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.673 11:46:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.673 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.674 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.674 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.674 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.674 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.674 11:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:09:31.674 00:09:31.674 --- 10.0.0.2 ping statistics --- 00:09:31.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.674 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:09:31.674 00:09:31.674 --- 10.0.0.1 ping statistics --- 00:09:31.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.674 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=861051 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 861051 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 861051 ']' 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:31.674 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.674 [2024-07-12 11:46:21.133062] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:09:31.674 [2024-07-12 11:46:21.133135] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.674 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.932 [2024-07-12 11:46:21.200582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.932 [2024-07-12 11:46:21.318505] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:31.932 [2024-07-12 11:46:21.318568] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.932 [2024-07-12 11:46:21.318584] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.932 [2024-07-12 11:46:21.318598] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.932 [2024-07-12 11:46:21.318610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.932 [2024-07-12 11:46:21.321892] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.932 [2024-07-12 11:46:21.321905] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 [2024-07-12 11:46:21.463098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 [2024-07-12 11:46:21.479311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 NULL1 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 Delay0 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=861178 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:32.192 11:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:32.192 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.192 [2024-07-12 11:46:21.563968] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
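At this point the delete_subsystem test has built its whole data path over RPC inside the cvl_0_0_ns_spdk namespace: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a NULL1 null bdev wrapped by the Delay0 delay bdev and attached as namespace 1, and a backgrounded spdk_nvme_perf (pid 861178) driving random I/O at it. A minimal standalone sketch of that sequence follows; it is not the harness script itself, it assumes an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock, and SPDK_DIR is a placeholder path.

#!/usr/bin/env bash
# Hedged sketch of the delete-subsystem-under-load flow seen in the trace.
# SPDK_DIR is an assumed location; the RPC arguments are taken verbatim
# from the rpc_cmd calls logged above.
SPDK_DIR=/path/to/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O in the background, as the test does with spdk_nvme_perf.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                                                # let queues fill with outstanding I/O
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # delete while I/O is in flight
wait "$perf_pid" || true                               # perf is expected to report I/O errors

Deleting the subsystem is expected to abort the outstanding commands, which is exactly what the (sct=0, sc=8) completion errors in the trace below show.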
00:09:34.098 11:46:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.098 11:46:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:34.098 11:46:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:34.356 [interleaved "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" lines condensed; the distinct qpair state errors they surround are kept below]
00:09:34.357 [2024-07-12 11:46:23.605871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6980 is same with the state(5) to be set
00:09:34.357 [2024-07-12 11:46:23.606553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5014000c00 is same with the state(5) to be set
00:09:35.294 [2024-07-12 11:46:24.578293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b7ab0 is same with the state(5) to be set
00:09:35.294 [2024-07-12 11:46:24.607621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b67a0 is same with the state(5) to be set
00:09:35.294 [2024-07-12 11:46:24.607826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b63e0 is same with the state(5) to be set
00:09:35.294 [2024-07-12 11:46:24.608226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f501400c780 is same with the state(5) to be set
00:09:35.294
Read completed with error (sct=0, sc=8) 00:09:35.294 Read completed with error (sct=0, sc=8) 00:09:35.294 [2024-07-12 11:46:24.608401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f501400bfe0 is same with the state(5) to be set 00:09:35.294 Initializing NVMe Controllers 00:09:35.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:35.294 Controller IO queue size 128, less than required. 00:09:35.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:35.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:35.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:35.294 Initialization complete. Launching workers. 00:09:35.294 ======================================================== 00:09:35.294 Latency(us) 00:09:35.294 Device Information : IOPS MiB/s Average min max 00:09:35.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.71 0.08 897439.01 623.58 1012133.55 00:09:35.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.73 0.08 901795.22 400.67 1013257.93 00:09:35.294 ======================================================== 00:09:35.294 Total : 336.43 0.16 899597.84 400.67 1013257.93 00:09:35.294 00:09:35.294 [2024-07-12 11:46:24.609196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b7ab0 (9): Bad file descriptor 00:09:35.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:35.294 11:46:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.294 11:46:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:35.294 11:46:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 861178 00:09:35.294 11:46:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 861178 00:09:35.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (861178) - No such process 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 861178 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 861178 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 861178 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:35.865 11:46:25 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.865 [2024-07-12 11:46:25.130124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=861599 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:35.865 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:35.865 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.865 [2024-07-12 11:46:25.186142] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
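Note: at this point delete_subsystem.sh has re-created nqn.2016-06.io.spdk:cnode1, re-attached the listener and the Delay0 namespace, and started another background spdk_nvme_perf run (pid 861599); the repeated kill -0 / sleep 0.5 lines that follow are the script polling that pid until the perf process exits. A minimal bash sketch of that wait loop, reconstructed from the xtrace rather than quoted from the script (variable names follow the trace):
perf_pid=861599          # assumption: pid recorded when spdk_nvme_perf was launched with '&'
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break      # give up after roughly 10s (20 x 0.5s polls)
    sleep 0.5
done
wait "$perf_pid"                      # reap the perf process once kill -0 reports it is gone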
00:09:36.434 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:36.434 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:36.434 11:46:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:36.693 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:36.693 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:36.693 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.260 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.261 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:37.261 11:46:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.828 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.828 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:37.828 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.395 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.395 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:38.395 11:46:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.962 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.962 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:38.962 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.962 Initializing NVMe Controllers 00:09:38.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:38.962 Controller IO queue size 128, less than required. 00:09:38.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:38.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:38.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:38.963 Initialization complete. Launching workers. 
00:09:38.963 ======================================================== 00:09:38.963 Latency(us) 00:09:38.963 Device Information : IOPS MiB/s Average min max 00:09:38.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005132.50 1000237.13 1044571.32 00:09:38.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004317.13 1000222.40 1012653.10 00:09:38.963 ======================================================== 00:09:38.963 Total : 256.00 0.12 1004724.82 1000222.40 1044571.32 00:09:38.963 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 861599 00:09:39.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (861599) - No such process 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 861599 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.227 rmmod nvme_tcp 00:09:39.227 rmmod nvme_fabrics 00:09:39.227 rmmod nvme_keyring 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:39.227 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 861051 ']' 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 861051 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 861051 ']' 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 861051 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 861051 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 861051' 00:09:39.488 killing process with pid 861051 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 861051 00:09:39.488 11:46:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 861051 
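Note: the summary above is the standard spdk_nvme_perf per-core latency report; averages around 1,000,000 us are expected in this run because the queued I/Os only complete (as aborts) once the subsystem is deleted. When such a report is captured to a file, the totals row can be extracted with a one-liner like the following (perf.log is a hypothetical capture of the output shown above):
awk '/Total *:/ {print "iops=" $3, "mib_s=" $4, "avg_us=" $5, "min_us=" $6, "max_us=" $7}' perf.log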
00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.748 11:46:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.658 11:46:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.658 00:09:41.658 real 0m12.147s 00:09:41.658 user 0m27.522s 00:09:41.658 sys 0m2.859s 00:09:41.658 11:46:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.658 11:46:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.658 ************************************ 00:09:41.658 END TEST nvmf_delete_subsystem 00:09:41.658 ************************************ 00:09:41.658 11:46:31 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:41.658 11:46:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:41.658 11:46:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.658 11:46:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.658 ************************************ 00:09:41.658 START TEST nvmf_ns_masking 00:09:41.658 ************************************ 00:09:41.658 11:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:41.916 * Looking for test storage... 
00:09:41.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.916 11:46:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=91fe9bbc-47ba-44ef-960a-141ce699815f 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.917 11:46:31 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.917 11:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:43.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:43.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:43.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:43.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.818 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:09:43.819 00:09:43.819 --- 10.0.0.2 ping statistics --- 00:09:43.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.819 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:43.819 00:09:43.819 --- 10.0.0.1 ping statistics --- 00:09:43.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.819 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.819 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=863957 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 863957 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 863957 ']' 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:44.077 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 [2024-07-12 11:46:33.362777] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
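Note: the nvmftestinit section above wires the two E810 port net devices (cvl_0_0 and cvl_0_1, found under PCI functions 0000:0a:00.0/0000:0a:00.1) into a self-contained NVMe/TCP test path: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions before nvmf_tgt is started inside the namespace. A condensed sketch of those steps, collected from the commands visible in the trace (a summary, not the nvmf/common.sh script itself):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator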
00:09:44.077 [2024-07-12 11:46:33.362879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.077 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.077 [2024-07-12 11:46:33.430277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.077 [2024-07-12 11:46:33.542856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.077 [2024-07-12 11:46:33.542934] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.077 [2024-07-12 11:46:33.542949] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.077 [2024-07-12 11:46:33.542960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.077 [2024-07-12 11:46:33.542969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.077 [2024-07-12 11:46:33.543049] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.077 [2024-07-12 11:46:33.543115] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.077 [2024-07-12 11:46:33.543182] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.077 [2024-07-12 11:46:33.543185] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.335 11:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:44.591 [2024-07-12 11:46:33.969584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.591 11:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:44.591 11:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:44.591 11:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:44.848 Malloc1 00:09:44.848 11:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:45.105 Malloc2 00:09:45.105 11:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:45.362 11:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:45.618 11:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.874 [2024-07-12 11:46:35.227998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 91fe9bbc-47ba-44ef-960a-141ce699815f -a 10.0.0.2 -s 4420 -i 4 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:45.874 11:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.405 [ 0]:0x1 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2deb7d9785e14f33b4eac0230b129e79 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2deb7d9785e14f33b4eac0230b129e79 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:09:48.405 [ 0]:0x1 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.405 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2deb7d9785e14f33b4eac0230b129e79 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2deb7d9785e14f33b4eac0230b129e79 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.406 [ 1]:0x2 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:48.406 11:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.664 11:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.922 11:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 91fe9bbc-47ba-44ef-960a-141ce699815f -a 10.0.0.2 -s 4420 -i 4 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:09:49.180 11:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:51.775 [ 0]:0x2 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.775 11:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:51.775 [ 0]:0x1 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2deb7d9785e14f33b4eac0230b129e79 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2deb7d9785e14f33b4eac0230b129e79 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:51.775 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:51.776 [ 1]:0x2 00:09:51.776 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:51.776 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:51.776 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:51.776 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.776 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:09:52.033 
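Note: the visibility checks above follow one pattern: list the active namespaces on the connected controller, then read the namespace's NGUID with nvme id-ns; a namespace that is masked for this host reports an all-zero NGUID, and the NOT wrapper asserts the check fails. A minimal sketch of that helper and the masking RPCs it is paired with, following the commands visible in the trace (a reconstruction, not the ns_masking.sh source):
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep "$nsid"                 # prints "[ n]:<nsid>" when the NSID is active
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    # a namespace hidden from this host comes back with an all-zero NGUID
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# hide / re-expose namespace 1 of cnode1 for host1 (RPCs as run in the trace)
scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1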
11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:52.033 [ 0]:0x2 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.033 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:52.291 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:52.291 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 91fe9bbc-47ba-44ef-960a-141ce699815f -a 10.0.0.2 -s 4420 -i 4 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:09:52.549 11:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:54.451 [ 0]:0x1 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.451 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2deb7d9785e14f33b4eac0230b129e79 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2deb7d9785e14f33b4eac0230b129e79 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:54.709 [ 1]:0x2 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.709 11:46:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:54.968 [ 0]:0x2 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:54.968 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:55.227 [2024-07-12 11:46:44.562341] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:55.227 request: 00:09:55.227 { 00:09:55.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.227 "nsid": 2, 00:09:55.227 "host": "nqn.2016-06.io.spdk:host1", 00:09:55.227 "method": 
"nvmf_ns_remove_host", 00:09:55.227 "req_id": 1 00:09:55.227 } 00:09:55.227 Got JSON-RPC error response 00:09:55.227 response: 00:09:55.227 { 00:09:55.227 "code": -32602, 00:09:55.227 "message": "Invalid parameters" 00:09:55.227 } 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:55.227 [ 0]:0x2 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17520a92c9804d73a4ea2676d8ed996d 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17520a92c9804d73a4ea2676d8ed996d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.227 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.485 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.485 rmmod nvme_tcp 00:09:55.485 rmmod nvme_fabrics 00:09:55.485 rmmod nvme_keyring 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 863957 ']' 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 863957 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 863957 ']' 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 863957 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:55.744 11:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 863957 00:09:55.744 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:55.744 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:55.744 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 863957' 00:09:55.744 killing process with pid 863957 00:09:55.744 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 863957 00:09:55.744 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 863957 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.003 11:46:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.909 11:46:47 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.909 00:09:57.909 real 0m16.260s 00:09:57.909 user 0m50.275s 00:09:57.909 sys 0m3.715s 00:09:57.909 11:46:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:57.909 11:46:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:57.909 ************************************ 00:09:57.909 END TEST nvmf_ns_masking 00:09:57.909 ************************************ 00:09:58.167 11:46:47 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:58.167 11:46:47 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:58.167 11:46:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:58.167 11:46:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:58.167 11:46:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.167 ************************************ 00:09:58.167 START TEST nvmf_nvme_cli 00:09:58.167 ************************************ 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:58.167 * Looking for test storage... 00:09:58.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.167 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.168 11:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:00.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:00.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.074 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.074 11:46:49 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:00.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:00.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:10:00.075 00:10:00.075 --- 10.0.0.2 ping statistics --- 00:10:00.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.075 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:10:00.075 00:10:00.075 --- 10.0.0.1 ping statistics --- 00:10:00.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.075 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=867381 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 867381 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 867381 ']' 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
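For reference, the nvme_cli.sh steps traced below reduce to the target/initiator sequence sketched here. This is a hand-written summary using the address, NQN and serial from this run (rpc.py defaults assumed for everything else); it is not part of the captured output:

  # Target side, against the nvmf_tgt just started in the cvl_0_0_ns_spdk namespace:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, via the kernel nvme-tcp driver:
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the test expects 2 namespaces
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1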
00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:00.075 11:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.075 [2024-07-12 11:46:49.548379] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:10:00.075 [2024-07-12 11:46:49.548475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.334 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.334 [2024-07-12 11:46:49.618721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.334 [2024-07-12 11:46:49.739863] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.334 [2024-07-12 11:46:49.739919] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.334 [2024-07-12 11:46:49.739945] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.334 [2024-07-12 11:46:49.739957] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.334 [2024-07-12 11:46:49.739968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.334 [2024-07-12 11:46:49.740042] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.334 [2024-07-12 11:46:49.740101] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.334 [2024-07-12 11:46:49.740075] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.334 [2024-07-12 11:46:49.740104] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 [2024-07-12 11:46:50.567955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 Malloc0 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 Malloc1 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 [2024-07-12 11:46:50.654549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 11:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.269 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:01.529 00:10:01.529 Discovery Log Number of Records 2, Generation counter 2 00:10:01.529 =====Discovery Log Entry 0====== 00:10:01.529 trtype: tcp 00:10:01.529 adrfam: ipv4 00:10:01.529 subtype: current discovery subsystem 00:10:01.529 treq: not required 00:10:01.529 portid: 0 00:10:01.529 trsvcid: 4420 00:10:01.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:01.529 traddr: 10.0.0.2 00:10:01.529 eflags: explicit discovery connections, duplicate discovery information 00:10:01.529 sectype: none 00:10:01.529 =====Discovery Log Entry 1====== 00:10:01.529 trtype: tcp 00:10:01.529 adrfam: ipv4 00:10:01.529 subtype: nvme subsystem 00:10:01.529 treq: not required 00:10:01.529 portid: 0 00:10:01.529 trsvcid: 4420 
00:10:01.529 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:01.529 traddr: 10.0.0.2 00:10:01.529 eflags: none 00:10:01.529 sectype: none 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:01.529 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:01.530 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:01.530 11:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:01.530 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:01.530 11:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:10:02.097 11:46:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:04.632 11:46:53 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:04.632 /dev/nvme0n1 ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:04.632 11:46:53 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.632 rmmod nvme_tcp 00:10:04.632 rmmod nvme_fabrics 00:10:04.632 rmmod nvme_keyring 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 867381 ']' 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 867381 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 867381 ']' 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 867381 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 867381 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 867381' 00:10:04.632 killing process with pid 867381 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 867381 00:10:04.632 11:46:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 867381 00:10:04.632 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.632 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.632 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.632 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.633 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.633 11:46:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.633 11:46:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.633 11:46:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.172 11:46:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.172 00:10:07.172 real 0m8.699s 00:10:07.172 user 0m17.823s 00:10:07.172 sys 0m2.120s 00:10:07.172 11:46:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:07.172 11:46:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:07.172 ************************************ 00:10:07.172 END TEST nvmf_nvme_cli 00:10:07.172 ************************************ 00:10:07.172 11:46:56 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:07.172 11:46:56 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:07.172 11:46:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:07.172 11:46:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:07.172 11:46:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.172 ************************************ 00:10:07.172 START TEST nvmf_vfio_user 00:10:07.172 ************************************ 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:07.172 * Looking for test storage... 00:10:07.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.172 11:46:56 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:07.173 
11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=868309 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 868309' 00:10:07.173 Process pid: 868309 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 868309 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 868309 ']' 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 [2024-07-12 11:46:56.305501] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:10:07.173 [2024-07-12 11:46:56.305576] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.173 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.173 [2024-07-12 11:46:56.365015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.173 [2024-07-12 11:46:56.474020] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.173 [2024-07-12 11:46:56.474067] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.173 [2024-07-12 11:46:56.474082] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.173 [2024-07-12 11:46:56.474095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.173 [2024-07-12 11:46:56.474106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
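For reference, the nvmf_vfio_user setup traced below reduces to the sequence sketched here: create the VFIOUSER transport, back each controller with a malloc bdev, and point a listener at a directory that a user-space vfio-user client can attach to. Paths, NQNs and serials are the ones used in this run; the loop and all other options are assumed, and this is not part of the captured output:

  # Target side: the listener address for VFIOUSER is a directory, not an IP:port.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

  # Client side: a user-space NVMe driver attaches over the vfio-user socket in that directory:
  build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'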
00:10:07.173 [2024-07-12 11:46:56.474183] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.173 [2024-07-12 11:46:56.474241] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.173 [2024-07-12 11:46:56.474311] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.173 [2024-07-12 11:46:56.474314] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:10:07.173 11:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:08.550 11:46:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:08.808 Malloc1 00:10:08.808 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:09.066 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:09.324 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:09.581 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:09.582 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:09.582 11:46:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:09.839 Malloc2 00:10:09.839 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:10.096 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:10.353 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:10.612 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:10.612 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:10.612 11:46:59 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:10.612 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:10.612 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:10.612 11:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:10.612 [2024-07-12 11:46:59.928484] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:10:10.612 [2024-07-12 11:46:59.928524] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868799 ] 00:10:10.612 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.612 [2024-07-12 11:46:59.964162] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:10.612 [2024-07-12 11:46:59.972373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:10.612 [2024-07-12 11:46:59.972402] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ad828f000 00:10:10.612 [2024-07-12 11:46:59.973373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.974364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.975370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.976380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.977384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.978390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.612 [2024-07-12 11:46:59.979398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.613 [2024-07-12 11:46:59.980404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.613 [2024-07-12 11:46:59.981408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:10.613 [2024-07-12 11:46:59.981431] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5ad8284000 00:10:10.613 [2024-07-12 11:46:59.982548] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:10.613 [2024-07-12 11:46:59.998526] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:10.613 [2024-07-12 11:46:59.998563] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:10.613 [2024-07-12 11:47:00.003552] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:10.613 [2024-07-12 11:47:00.003617] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:10.613 [2024-07-12 11:47:00.003714] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:10.613 [2024-07-12 11:47:00.003744] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:10.613 [2024-07-12 11:47:00.003755] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:10.613 [2024-07-12 11:47:00.004547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:10.613 [2024-07-12 11:47:00.004567] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:10.613 [2024-07-12 11:47:00.004581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:10.613 [2024-07-12 11:47:00.005546] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:10.613 [2024-07-12 11:47:00.005572] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:10.613 [2024-07-12 11:47:00.005587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.006558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:10.613 [2024-07-12 11:47:00.006578] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.007566] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:10.613 [2024-07-12 11:47:00.007586] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:10.613 [2024-07-12 11:47:00.007596] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.007608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.007717] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:10.613 [2024-07-12 11:47:00.007726] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.007734] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:10.613 [2024-07-12 11:47:00.008569] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:10.613 [2024-07-12 11:47:00.009578] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:10.613 [2024-07-12 11:47:00.010585] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:10.613 [2024-07-12 11:47:00.011581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:10.613 [2024-07-12 11:47:00.011679] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:10.613 [2024-07-12 11:47:00.012599] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:10.613 [2024-07-12 11:47:00.012618] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:10.613 [2024-07-12 11:47:00.012627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.012651] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:10.613 [2024-07-12 11:47:00.012665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.012693] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.613 [2024-07-12 11:47:00.012704] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.613 [2024-07-12 11:47:00.012724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.012780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:10.613 [2024-07-12 11:47:00.012801] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:10.613 [2024-07-12 11:47:00.012810] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:10.613 [2024-07-12 11:47:00.012818] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:10.613 [2024-07-12 11:47:00.012829] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:10.613 [2024-07-12 11:47:00.012838] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:10:10.613 [2024-07-12 11:47:00.012845] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:10.613 [2024-07-12 11:47:00.012876] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.012891] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.012918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.012937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:10.613 [2024-07-12 11:47:00.012955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.613 [2024-07-12 11:47:00.012968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.613 [2024-07-12 11:47:00.012981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.613 [2024-07-12 11:47:00.012994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.613 [2024-07-12 11:47:00.013003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.013046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:10.613 [2024-07-12 11:47:00.013057] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:10.613 [2024-07-12 11:47:00.013066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013088] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.013120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:10.613 [2024-07-12 11:47:00.013177] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013229] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:10.613 [2024-07-12 11:47:00.013237] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:10.613 [2024-07-12 11:47:00.013247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.013265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:10.613 [2024-07-12 11:47:00.013282] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:10.613 [2024-07-12 11:47:00.013302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:10.613 [2024-07-12 11:47:00.013329] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.613 [2024-07-12 11:47:00.013338] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.613 [2024-07-12 11:47:00.013348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.613 [2024-07-12 11:47:00.013370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013391] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013418] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.614 [2024-07-12 11:47:00.013427] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.614 [2024-07-12 11:47:00.013436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013466] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013492] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013502] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013510] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013519] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:10.614 [2024-07-12 11:47:00.013526] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:10.614 [2024-07-12 11:47:00.013538] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:10.614 [2024-07-12 11:47:00.013568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013692] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:10.614 [2024-07-12 11:47:00.013702] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:10.614 [2024-07-12 11:47:00.013708] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:10.614 [2024-07-12 11:47:00.013715] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:10.614 [2024-07-12 11:47:00.013724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:10.614 [2024-07-12 11:47:00.013737] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:10.614 [2024-07-12 11:47:00.013745] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:10.614 [2024-07-12 11:47:00.013754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013766] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:10.614 [2024-07-12 11:47:00.013775] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.614 [2024-07-12 11:47:00.013784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013797] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:10.614 [2024-07-12 11:47:00.013805] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:10.614 [2024-07-12 11:47:00.013814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:10.614 [2024-07-12 11:47:00.013826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:10.614 [2024-07-12 11:47:00.013908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:10.614 ===================================================== 00:10:10.614 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:10.614 ===================================================== 00:10:10.614 Controller Capabilities/Features 00:10:10.614 ================================ 00:10:10.614 Vendor ID: 4e58 00:10:10.614 Subsystem Vendor ID: 4e58 00:10:10.614 Serial Number: SPDK1 00:10:10.614 Model Number: SPDK bdev Controller 00:10:10.614 Firmware Version: 24.09 00:10:10.614 Recommended Arb Burst: 6 00:10:10.614 IEEE OUI Identifier: 8d 6b 50 00:10:10.614 Multi-path I/O 00:10:10.614 May have multiple subsystem ports: Yes 00:10:10.614 May have multiple controllers: Yes 00:10:10.614 Associated with SR-IOV VF: No 00:10:10.614 Max Data Transfer Size: 131072 00:10:10.614 Max Number of Namespaces: 32 00:10:10.614 Max Number of I/O Queues: 127 00:10:10.614 NVMe Specification Version (VS): 1.3 00:10:10.614 NVMe Specification Version (Identify): 1.3 00:10:10.614 Maximum Queue Entries: 256 00:10:10.614 Contiguous Queues Required: Yes 00:10:10.614 Arbitration Mechanisms Supported 00:10:10.614 Weighted Round Robin: Not Supported 00:10:10.614 Vendor Specific: Not Supported 00:10:10.614 Reset Timeout: 15000 ms 00:10:10.614 Doorbell Stride: 4 bytes 00:10:10.614 NVM Subsystem Reset: Not Supported 00:10:10.614 Command Sets Supported 00:10:10.614 NVM Command Set: Supported 00:10:10.614 Boot Partition: Not Supported 00:10:10.614 Memory Page Size Minimum: 4096 bytes 00:10:10.614 Memory Page Size Maximum: 4096 bytes 00:10:10.614 Persistent Memory Region: Not Supported 00:10:10.614 Optional Asynchronous Events Supported 00:10:10.614 Namespace Attribute Notices: Supported 00:10:10.614 Firmware Activation Notices: Not Supported 00:10:10.614 ANA Change Notices: Not Supported 00:10:10.614 PLE Aggregate Log Change Notices: 
Not Supported 00:10:10.614 LBA Status Info Alert Notices: Not Supported 00:10:10.615 EGE Aggregate Log Change Notices: Not Supported 00:10:10.615 Normal NVM Subsystem Shutdown event: Not Supported 00:10:10.615 Zone Descriptor Change Notices: Not Supported 00:10:10.615 Discovery Log Change Notices: Not Supported 00:10:10.615 Controller Attributes 00:10:10.615 128-bit Host Identifier: Supported 00:10:10.615 Non-Operational Permissive Mode: Not Supported 00:10:10.615 NVM Sets: Not Supported 00:10:10.615 Read Recovery Levels: Not Supported 00:10:10.615 Endurance Groups: Not Supported 00:10:10.615 Predictable Latency Mode: Not Supported 00:10:10.615 Traffic Based Keep ALive: Not Supported 00:10:10.615 Namespace Granularity: Not Supported 00:10:10.615 SQ Associations: Not Supported 00:10:10.615 UUID List: Not Supported 00:10:10.615 Multi-Domain Subsystem: Not Supported 00:10:10.615 Fixed Capacity Management: Not Supported 00:10:10.615 Variable Capacity Management: Not Supported 00:10:10.615 Delete Endurance Group: Not Supported 00:10:10.615 Delete NVM Set: Not Supported 00:10:10.615 Extended LBA Formats Supported: Not Supported 00:10:10.615 Flexible Data Placement Supported: Not Supported 00:10:10.615 00:10:10.615 Controller Memory Buffer Support 00:10:10.615 ================================ 00:10:10.615 Supported: No 00:10:10.615 00:10:10.615 Persistent Memory Region Support 00:10:10.615 ================================ 00:10:10.615 Supported: No 00:10:10.615 00:10:10.615 Admin Command Set Attributes 00:10:10.615 ============================ 00:10:10.615 Security Send/Receive: Not Supported 00:10:10.615 Format NVM: Not Supported 00:10:10.615 Firmware Activate/Download: Not Supported 00:10:10.615 Namespace Management: Not Supported 00:10:10.615 Device Self-Test: Not Supported 00:10:10.615 Directives: Not Supported 00:10:10.615 NVMe-MI: Not Supported 00:10:10.615 Virtualization Management: Not Supported 00:10:10.615 Doorbell Buffer Config: Not Supported 00:10:10.615 Get LBA Status Capability: Not Supported 00:10:10.615 Command & Feature Lockdown Capability: Not Supported 00:10:10.615 Abort Command Limit: 4 00:10:10.615 Async Event Request Limit: 4 00:10:10.615 Number of Firmware Slots: N/A 00:10:10.615 Firmware Slot 1 Read-Only: N/A 00:10:10.615 Firmware Activation Without Reset: N/A 00:10:10.615 Multiple Update Detection Support: N/A 00:10:10.615 Firmware Update Granularity: No Information Provided 00:10:10.615 Per-Namespace SMART Log: No 00:10:10.615 Asymmetric Namespace Access Log Page: Not Supported 00:10:10.615 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:10.615 Command Effects Log Page: Supported 00:10:10.615 Get Log Page Extended Data: Supported 00:10:10.615 Telemetry Log Pages: Not Supported 00:10:10.615 Persistent Event Log Pages: Not Supported 00:10:10.615 Supported Log Pages Log Page: May Support 00:10:10.615 Commands Supported & Effects Log Page: Not Supported 00:10:10.615 Feature Identifiers & Effects Log Page:May Support 00:10:10.615 NVMe-MI Commands & Effects Log Page: May Support 00:10:10.615 Data Area 4 for Telemetry Log: Not Supported 00:10:10.615 Error Log Page Entries Supported: 128 00:10:10.615 Keep Alive: Supported 00:10:10.615 Keep Alive Granularity: 10000 ms 00:10:10.615 00:10:10.615 NVM Command Set Attributes 00:10:10.615 ========================== 00:10:10.615 Submission Queue Entry Size 00:10:10.615 Max: 64 00:10:10.615 Min: 64 00:10:10.615 Completion Queue Entry Size 00:10:10.615 Max: 16 00:10:10.615 Min: 16 00:10:10.615 Number of Namespaces: 32 00:10:10.615 Compare 
Command: Supported 00:10:10.615 Write Uncorrectable Command: Not Supported 00:10:10.615 Dataset Management Command: Supported 00:10:10.615 Write Zeroes Command: Supported 00:10:10.615 Set Features Save Field: Not Supported 00:10:10.615 Reservations: Not Supported 00:10:10.615 Timestamp: Not Supported 00:10:10.615 Copy: Supported 00:10:10.615 Volatile Write Cache: Present 00:10:10.615 Atomic Write Unit (Normal): 1 00:10:10.615 Atomic Write Unit (PFail): 1 00:10:10.615 Atomic Compare & Write Unit: 1 00:10:10.615 Fused Compare & Write: Supported 00:10:10.615 Scatter-Gather List 00:10:10.615 SGL Command Set: Supported (Dword aligned) 00:10:10.615 SGL Keyed: Not Supported 00:10:10.615 SGL Bit Bucket Descriptor: Not Supported 00:10:10.615 SGL Metadata Pointer: Not Supported 00:10:10.615 Oversized SGL: Not Supported 00:10:10.615 SGL Metadata Address: Not Supported 00:10:10.615 SGL Offset: Not Supported 00:10:10.615 Transport SGL Data Block: Not Supported 00:10:10.615 Replay Protected Memory Block: Not Supported 00:10:10.615 00:10:10.615 Firmware Slot Information 00:10:10.615 ========================= 00:10:10.615 Active slot: 1 00:10:10.615 Slot 1 Firmware Revision: 24.09 00:10:10.615 00:10:10.615 00:10:10.615 Commands Supported and Effects 00:10:10.615 ============================== 00:10:10.615 Admin Commands 00:10:10.615 -------------- 00:10:10.615 Get Log Page (02h): Supported 00:10:10.615 Identify (06h): Supported 00:10:10.615 Abort (08h): Supported 00:10:10.615 Set Features (09h): Supported 00:10:10.615 Get Features (0Ah): Supported 00:10:10.615 Asynchronous Event Request (0Ch): Supported 00:10:10.615 Keep Alive (18h): Supported 00:10:10.615 I/O Commands 00:10:10.615 ------------ 00:10:10.615 Flush (00h): Supported LBA-Change 00:10:10.615 Write (01h): Supported LBA-Change 00:10:10.615 Read (02h): Supported 00:10:10.615 Compare (05h): Supported 00:10:10.615 Write Zeroes (08h): Supported LBA-Change 00:10:10.615 Dataset Management (09h): Supported LBA-Change 00:10:10.615 Copy (19h): Supported LBA-Change 00:10:10.615 Unknown (79h): Supported LBA-Change 00:10:10.615 Unknown (7Ah): Supported 00:10:10.615 00:10:10.615 Error Log 00:10:10.616 ========= 00:10:10.616 00:10:10.616 Arbitration 00:10:10.616 =========== 00:10:10.616 Arbitration Burst: 1 00:10:10.616 00:10:10.616 Power Management 00:10:10.616 ================ 00:10:10.616 Number of Power States: 1 00:10:10.616 Current Power State: Power State #0 00:10:10.616 Power State #0: 00:10:10.616 Max Power: 0.00 W 00:10:10.616 Non-Operational State: Operational 00:10:10.616 Entry Latency: Not Reported 00:10:10.616 Exit Latency: Not Reported 00:10:10.616 Relative Read Throughput: 0 00:10:10.616 Relative Read Latency: 0 00:10:10.616 Relative Write Throughput: 0 00:10:10.616 Relative Write Latency: 0 00:10:10.616 Idle Power: Not Reported 00:10:10.616 Active Power: Not Reported 00:10:10.616 Non-Operational Permissive Mode: Not Supported 00:10:10.616 00:10:10.616 Health Information 00:10:10.616 ================== 00:10:10.616 Critical Warnings: 00:10:10.616 Available Spare Space: OK 00:10:10.616 Temperature: OK 00:10:10.616 Device Reliability: OK 00:10:10.616 Read Only: No 00:10:10.616 Volatile Memory Backup: OK 00:10:10.616 Current Temperature: 0 Kelvin (-2[2024-07-12 11:47:00.014034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:10.616 [2024-07-12 11:47:00.014055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:10:10.616 [2024-07-12 11:47:00.014093] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:10.616 [2024-07-12 11:47:00.014111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.616 [2024-07-12 11:47:00.014122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.616 [2024-07-12 11:47:00.014133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.616 [2024-07-12 11:47:00.014143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.616 [2024-07-12 11:47:00.017879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:10.616 [2024-07-12 11:47:00.017903] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:10.616 [2024-07-12 11:47:00.018623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:10.616 [2024-07-12 11:47:00.018696] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:10.616 [2024-07-12 11:47:00.018711] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:10.616 [2024-07-12 11:47:00.019633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:10.616 [2024-07-12 11:47:00.019656] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:10.616 [2024-07-12 11:47:00.019777] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:10.616 [2024-07-12 11:47:00.021685] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:10.616 73 Celsius) 00:10:10.616 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:10.616 Available Spare: 0% 00:10:10.616 Available Spare Threshold: 0% 00:10:10.616 Life Percentage Used: 0% 00:10:10.616 Data Units Read: 0 00:10:10.616 Data Units Written: 0 00:10:10.616 Host Read Commands: 0 00:10:10.616 Host Write Commands: 0 00:10:10.616 Controller Busy Time: 0 minutes 00:10:10.616 Power Cycles: 0 00:10:10.616 Power On Hours: 0 hours 00:10:10.616 Unsafe Shutdowns: 0 00:10:10.616 Unrecoverable Media Errors: 0 00:10:10.616 Lifetime Error Log Entries: 0 00:10:10.616 Warning Temperature Time: 0 minutes 00:10:10.616 Critical Temperature Time: 0 minutes 00:10:10.616 00:10:10.616 Number of Queues 00:10:10.616 ================ 00:10:10.616 Number of I/O Submission Queues: 127 00:10:10.616 Number of I/O Completion Queues: 127 00:10:10.616 00:10:10.616 Active Namespaces 00:10:10.616 ================= 00:10:10.616 Namespace ID:1 00:10:10.616 Error Recovery Timeout: Unlimited 00:10:10.616 Command Set Identifier: NVM (00h) 00:10:10.616 Deallocate: Supported 00:10:10.616 Deallocated/Unwritten Error: Not Supported 00:10:10.616 Deallocated Read Value: Unknown 00:10:10.616 Deallocate 
in Write Zeroes: Not Supported 00:10:10.616 Deallocated Guard Field: 0xFFFF 00:10:10.616 Flush: Supported 00:10:10.616 Reservation: Supported 00:10:10.616 Namespace Sharing Capabilities: Multiple Controllers 00:10:10.616 Size (in LBAs): 131072 (0GiB) 00:10:10.616 Capacity (in LBAs): 131072 (0GiB) 00:10:10.616 Utilization (in LBAs): 131072 (0GiB) 00:10:10.616 NGUID: DF9B9183405A4B7FA20724585424C2B4 00:10:10.616 UUID: df9b9183-405a-4b7f-a207-24585424c2b4 00:10:10.616 Thin Provisioning: Not Supported 00:10:10.616 Per-NS Atomic Units: Yes 00:10:10.616 Atomic Boundary Size (Normal): 0 00:10:10.616 Atomic Boundary Size (PFail): 0 00:10:10.616 Atomic Boundary Offset: 0 00:10:10.616 Maximum Single Source Range Length: 65535 00:10:10.616 Maximum Copy Length: 65535 00:10:10.616 Maximum Source Range Count: 1 00:10:10.616 NGUID/EUI64 Never Reused: No 00:10:10.616 Namespace Write Protected: No 00:10:10.616 Number of LBA Formats: 1 00:10:10.616 Current LBA Format: LBA Format #00 00:10:10.616 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.616 00:10:10.616 11:47:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:10.616 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.875 [2024-07-12 11:47:00.261869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:16.238 Initializing NVMe Controllers 00:10:16.239 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:16.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:16.239 Initialization complete. Launching workers. 00:10:16.239 ======================================================== 00:10:16.239 Latency(us) 00:10:16.239 Device Information : IOPS MiB/s Average min max 00:10:16.239 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33668.19 131.52 3800.96 1179.28 9559.27 00:10:16.239 ======================================================== 00:10:16.239 Total : 33668.19 131.52 3800.96 1179.28 9559.27 00:10:16.239 00:10:16.239 [2024-07-12 11:47:05.283873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:16.239 11:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:16.239 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.239 [2024-07-12 11:47:05.516999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:21.504 Initializing NVMe Controllers 00:10:21.504 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:21.504 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:21.504 Initialization complete. Launching workers. 
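Both the identify dump above and the spdk_nvme_perf runs in this stretch of the log reach the controller through a transport ID string rather than a PCI address. A minimal sketch of that pattern for the first device in this run (same $SPDK shorthand as above; the extra -L debug-log flags from the identify invocation are omitted here):

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# Dump controller, namespace and log-page data, as in the identify pass above.
$SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g

# 5-second 4096-byte read run at queue depth 128, pinned to core 1 (mask 0x2), as in the perf passes.
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2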
00:10:21.504 ======================================================== 00:10:21.504 Latency(us) 00:10:21.504 Device Information : IOPS MiB/s Average min max 00:10:21.504 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16012.48 62.55 7999.01 7743.87 15992.34 00:10:21.504 ======================================================== 00:10:21.504 Total : 16012.48 62.55 7999.01 7743.87 15992.34 00:10:21.504 00:10:21.504 [2024-07-12 11:47:10.557310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:21.504 11:47:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:21.504 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.504 [2024-07-12 11:47:10.770397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:26.785 [2024-07-12 11:47:15.842260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:26.785 Initializing NVMe Controllers 00:10:26.785 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:26.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:26.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:26.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:26.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:26.785 Initialization complete. Launching workers. 00:10:26.785 Starting thread on core 2 00:10:26.785 Starting thread on core 3 00:10:26.785 Starting thread on core 1 00:10:26.785 11:47:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:26.785 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.785 [2024-07-12 11:47:16.148214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:30.075 [2024-07-12 11:47:19.213710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:30.075 Initializing NVMe Controllers 00:10:30.075 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.075 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:30.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:30.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:30.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:30.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:30.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:30.075 Initialization complete. Launching workers. 
00:10:30.075 Starting thread on core 1 with urgent priority queue 00:10:30.075 Starting thread on core 2 with urgent priority queue 00:10:30.076 Starting thread on core 3 with urgent priority queue 00:10:30.076 Starting thread on core 0 with urgent priority queue 00:10:30.076 SPDK bdev Controller (SPDK1 ) core 0: 4179.33 IO/s 23.93 secs/100000 ios 00:10:30.076 SPDK bdev Controller (SPDK1 ) core 1: 5232.00 IO/s 19.11 secs/100000 ios 00:10:30.076 SPDK bdev Controller (SPDK1 ) core 2: 5942.67 IO/s 16.83 secs/100000 ios 00:10:30.076 SPDK bdev Controller (SPDK1 ) core 3: 5720.67 IO/s 17.48 secs/100000 ios 00:10:30.076 ======================================================== 00:10:30.076 00:10:30.076 11:47:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:30.076 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.076 [2024-07-12 11:47:19.501373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:30.076 Initializing NVMe Controllers 00:10:30.076 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.076 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.076 Namespace ID: 1 size: 0GB 00:10:30.076 Initialization complete. 00:10:30.076 INFO: using host memory buffer for IO 00:10:30.076 Hello world! 00:10:30.076 [2024-07-12 11:47:19.534916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:30.335 11:47:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:30.335 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.335 [2024-07-12 11:47:19.824373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:31.713 Initializing NVMe Controllers 00:10:31.713 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:31.713 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:31.713 Initialization complete. Launching workers. 
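The overhead tool's output that follows prints a submit and a complete latency histogram. Assuming the usual SPDK histogram layout, which the numbers here are consistent with, each row gives a bucket range in microseconds, the cumulative share of I/Os up to that bucket, and the per-bucket count in parentheses. A quick check against the first submit rows: 64 / 0.4889% puts the total at roughly 13.1 thousand I/Os, and (64 + 192) / 13,090 is about 1.96%, matching the 1.9555% printed for the second bucket. The dozen entries in the ~4,000 us buckets at the very end are what pull the submit average up to about 7.6 us (7600 ns) even though most submissions land near 3.5-4 us.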
00:10:31.713 submit (in ns) avg, min, max = 7600.4, 3510.0, 4014932.2 00:10:31.713 complete (in ns) avg, min, max = 26086.2, 2078.9, 4014686.7 00:10:31.713 00:10:31.713 Submit histogram 00:10:31.713 ================ 00:10:31.713 Range in us Cumulative Count 00:10:31.713 3.508 - 3.532: 0.4889% ( 64) 00:10:31.713 3.532 - 3.556: 1.9555% ( 192) 00:10:31.713 3.556 - 3.579: 5.8590% ( 511) 00:10:31.713 3.579 - 3.603: 12.3826% ( 854) 00:10:31.713 3.603 - 3.627: 22.9241% ( 1380) 00:10:31.713 3.627 - 3.650: 33.0532% ( 1326) 00:10:31.713 3.650 - 3.674: 41.1046% ( 1054) 00:10:31.713 3.674 - 3.698: 47.7045% ( 864) 00:10:31.713 3.698 - 3.721: 54.2892% ( 862) 00:10:31.713 3.721 - 3.745: 59.6364% ( 700) 00:10:31.713 3.745 - 3.769: 64.0211% ( 574) 00:10:31.713 3.769 - 3.793: 67.1912% ( 415) 00:10:31.713 3.793 - 3.816: 70.0481% ( 374) 00:10:31.713 3.816 - 3.840: 73.2488% ( 419) 00:10:31.713 3.840 - 3.864: 76.9384% ( 483) 00:10:31.713 3.864 - 3.887: 80.8036% ( 506) 00:10:31.713 3.887 - 3.911: 83.6223% ( 369) 00:10:31.713 3.911 - 3.935: 86.0209% ( 314) 00:10:31.713 3.935 - 3.959: 87.9383% ( 251) 00:10:31.713 3.959 - 3.982: 89.7945% ( 243) 00:10:31.713 3.982 - 4.006: 91.4369% ( 215) 00:10:31.713 4.006 - 4.030: 92.5903% ( 151) 00:10:31.713 4.030 - 4.053: 93.5223% ( 122) 00:10:31.713 4.053 - 4.077: 94.2632% ( 97) 00:10:31.713 4.077 - 4.101: 94.9966% ( 96) 00:10:31.713 4.101 - 4.124: 95.4091% ( 54) 00:10:31.713 4.124 - 4.148: 95.7834% ( 49) 00:10:31.713 4.148 - 4.172: 96.0660% ( 37) 00:10:31.713 4.172 - 4.196: 96.1882% ( 16) 00:10:31.713 4.196 - 4.219: 96.3868% ( 26) 00:10:31.713 4.219 - 4.243: 96.4556% ( 9) 00:10:31.713 4.243 - 4.267: 96.5243% ( 9) 00:10:31.713 4.267 - 4.290: 96.6542% ( 17) 00:10:31.713 4.290 - 4.314: 96.7229% ( 9) 00:10:31.713 4.314 - 4.338: 96.7841% ( 8) 00:10:31.713 4.338 - 4.361: 96.8604% ( 10) 00:10:31.713 4.361 - 4.385: 96.8986% ( 5) 00:10:31.713 4.385 - 4.409: 96.9445% ( 6) 00:10:31.713 4.409 - 4.433: 96.9979% ( 7) 00:10:31.713 4.433 - 4.456: 97.0209% ( 3) 00:10:31.713 4.456 - 4.480: 97.0667% ( 6) 00:10:31.713 4.480 - 4.504: 97.0896% ( 3) 00:10:31.713 4.504 - 4.527: 97.1049% ( 2) 00:10:31.713 4.527 - 4.551: 97.1125% ( 1) 00:10:31.713 4.551 - 4.575: 97.1584% ( 6) 00:10:31.713 4.575 - 4.599: 97.1813% ( 3) 00:10:31.713 4.599 - 4.622: 97.1889% ( 1) 00:10:31.713 4.622 - 4.646: 97.2042% ( 2) 00:10:31.713 4.646 - 4.670: 97.2118% ( 1) 00:10:31.713 4.670 - 4.693: 97.2347% ( 3) 00:10:31.713 4.693 - 4.717: 97.2653% ( 4) 00:10:31.713 4.717 - 4.741: 97.2959% ( 4) 00:10:31.713 4.741 - 4.764: 97.3722% ( 10) 00:10:31.713 4.764 - 4.788: 97.3875% ( 2) 00:10:31.713 4.788 - 4.812: 97.4181% ( 4) 00:10:31.713 4.812 - 4.836: 97.4715% ( 7) 00:10:31.713 4.836 - 4.859: 97.5174% ( 6) 00:10:31.713 4.859 - 4.883: 97.5632% ( 6) 00:10:31.713 4.883 - 4.907: 97.5861% ( 3) 00:10:31.713 4.907 - 4.930: 97.6243% ( 5) 00:10:31.713 4.930 - 4.954: 97.6931% ( 9) 00:10:31.713 4.954 - 4.978: 97.7313% ( 5) 00:10:31.713 4.978 - 5.001: 97.7695% ( 5) 00:10:31.713 5.001 - 5.025: 97.7847% ( 2) 00:10:31.713 5.025 - 5.049: 97.8382% ( 7) 00:10:31.713 5.049 - 5.073: 97.8688% ( 4) 00:10:31.713 5.073 - 5.096: 97.8993% ( 4) 00:10:31.713 5.096 - 5.120: 97.9452% ( 6) 00:10:31.713 5.120 - 5.144: 97.9910% ( 6) 00:10:31.713 5.144 - 5.167: 98.0063% ( 2) 00:10:31.713 5.167 - 5.191: 98.0139% ( 1) 00:10:31.713 5.191 - 5.215: 98.0445% ( 4) 00:10:31.713 5.215 - 5.239: 98.0521% ( 1) 00:10:31.713 5.239 - 5.262: 98.0903% ( 5) 00:10:31.713 5.286 - 5.310: 98.0979% ( 1) 00:10:31.713 5.310 - 5.333: 98.1132% ( 2) 00:10:31.713 5.333 - 5.357: 98.1208% ( 1) 
00:10:31.713 5.381 - 5.404: 98.1285% ( 1) 00:10:31.713 5.404 - 5.428: 98.1361% ( 1) 00:10:31.713 5.428 - 5.452: 98.1438% ( 1) 00:10:31.713 5.452 - 5.476: 98.1514% ( 1) 00:10:31.713 5.476 - 5.499: 98.1743% ( 3) 00:10:31.713 5.499 - 5.523: 98.1820% ( 1) 00:10:31.713 5.547 - 5.570: 98.1896% ( 1) 00:10:31.713 5.689 - 5.713: 98.1972% ( 1) 00:10:31.713 5.736 - 5.760: 98.2049% ( 1) 00:10:31.714 5.831 - 5.855: 98.2125% ( 1) 00:10:31.714 5.902 - 5.926: 98.2202% ( 1) 00:10:31.714 5.973 - 5.997: 98.2278% ( 1) 00:10:31.714 5.997 - 6.021: 98.2354% ( 1) 00:10:31.714 6.068 - 6.116: 98.2507% ( 2) 00:10:31.714 6.258 - 6.305: 98.2583% ( 1) 00:10:31.714 6.305 - 6.353: 98.2736% ( 2) 00:10:31.714 6.495 - 6.542: 98.2813% ( 1) 00:10:31.714 6.684 - 6.732: 98.2889% ( 1) 00:10:31.714 6.732 - 6.779: 98.3042% ( 2) 00:10:31.714 6.779 - 6.827: 98.3118% ( 1) 00:10:31.714 6.827 - 6.874: 98.3195% ( 1) 00:10:31.714 6.921 - 6.969: 98.3347% ( 2) 00:10:31.714 7.016 - 7.064: 98.3424% ( 1) 00:10:31.714 7.111 - 7.159: 98.3500% ( 1) 00:10:31.714 7.206 - 7.253: 98.3577% ( 1) 00:10:31.714 7.301 - 7.348: 98.3729% ( 2) 00:10:31.714 7.396 - 7.443: 98.3958% ( 3) 00:10:31.714 7.443 - 7.490: 98.4111% ( 2) 00:10:31.714 7.490 - 7.538: 98.4188% ( 1) 00:10:31.714 7.633 - 7.680: 98.4340% ( 2) 00:10:31.714 7.680 - 7.727: 98.4417% ( 1) 00:10:31.714 7.727 - 7.775: 98.4493% ( 1) 00:10:31.714 7.775 - 7.822: 98.4570% ( 1) 00:10:31.714 7.822 - 7.870: 98.4722% ( 2) 00:10:31.714 7.917 - 7.964: 98.4799% ( 1) 00:10:31.714 8.012 - 8.059: 98.4875% ( 1) 00:10:31.714 8.059 - 8.107: 98.5028% ( 2) 00:10:31.714 8.107 - 8.154: 98.5181% ( 2) 00:10:31.714 8.154 - 8.201: 98.5257% ( 1) 00:10:31.714 8.201 - 8.249: 98.5410% ( 2) 00:10:31.714 8.296 - 8.344: 98.5563% ( 2) 00:10:31.714 8.439 - 8.486: 98.5639% ( 1) 00:10:31.714 8.486 - 8.533: 98.5792% ( 2) 00:10:31.714 8.676 - 8.723: 98.5868% ( 1) 00:10:31.714 8.770 - 8.818: 98.5945% ( 1) 00:10:31.714 8.865 - 8.913: 98.6021% ( 1) 00:10:31.714 8.960 - 9.007: 98.6097% ( 1) 00:10:31.714 9.007 - 9.055: 98.6250% ( 2) 00:10:31.714 9.150 - 9.197: 98.6326% ( 1) 00:10:31.714 9.197 - 9.244: 98.6403% ( 1) 00:10:31.714 9.292 - 9.339: 98.6479% ( 1) 00:10:31.714 9.339 - 9.387: 98.6556% ( 1) 00:10:31.714 9.481 - 9.529: 98.6632% ( 1) 00:10:31.714 9.529 - 9.576: 98.6708% ( 1) 00:10:31.714 9.576 - 9.624: 98.6785% ( 1) 00:10:31.714 9.624 - 9.671: 98.6861% ( 1) 00:10:31.714 9.766 - 9.813: 98.6938% ( 1) 00:10:31.714 9.813 - 9.861: 98.7014% ( 1) 00:10:31.714 9.861 - 9.908: 98.7167% ( 2) 00:10:31.714 9.908 - 9.956: 98.7320% ( 2) 00:10:31.714 9.956 - 10.003: 98.7396% ( 1) 00:10:31.714 10.193 - 10.240: 98.7472% ( 1) 00:10:31.714 10.382 - 10.430: 98.7625% ( 2) 00:10:31.714 10.524 - 10.572: 98.7778% ( 2) 00:10:31.714 10.619 - 10.667: 98.7854% ( 1) 00:10:31.714 10.667 - 10.714: 98.7931% ( 1) 00:10:31.714 11.141 - 11.188: 98.8007% ( 1) 00:10:31.714 11.283 - 11.330: 98.8083% ( 1) 00:10:31.714 11.330 - 11.378: 98.8236% ( 2) 00:10:31.714 11.473 - 11.520: 98.8313% ( 1) 00:10:31.714 11.710 - 11.757: 98.8389% ( 1) 00:10:31.714 11.757 - 11.804: 98.8465% ( 1) 00:10:31.714 11.852 - 11.899: 98.8618% ( 2) 00:10:31.714 11.947 - 11.994: 98.8695% ( 1) 00:10:31.714 12.326 - 12.421: 98.8771% ( 1) 00:10:31.714 12.516 - 12.610: 98.8924% ( 2) 00:10:31.714 12.610 - 12.705: 98.9000% ( 1) 00:10:31.714 12.800 - 12.895: 98.9076% ( 1) 00:10:31.714 13.084 - 13.179: 98.9153% ( 1) 00:10:31.714 13.274 - 13.369: 98.9306% ( 2) 00:10:31.714 13.369 - 13.464: 98.9458% ( 2) 00:10:31.714 13.653 - 13.748: 98.9535% ( 1) 00:10:31.714 13.748 - 13.843: 98.9611% ( 1) 00:10:31.714 13.843 - 
13.938: 98.9764% ( 2) 00:10:31.714 13.938 - 14.033: 98.9917% ( 2) 00:10:31.714 14.127 - 14.222: 98.9993% ( 1) 00:10:31.714 14.222 - 14.317: 99.0070% ( 1) 00:10:31.714 14.412 - 14.507: 99.0146% ( 1) 00:10:31.714 14.601 - 14.696: 99.0222% ( 1) 00:10:31.714 14.791 - 14.886: 99.0299% ( 1) 00:10:31.714 14.886 - 14.981: 99.0375% ( 1) 00:10:31.714 14.981 - 15.076: 99.0451% ( 1) 00:10:31.714 15.455 - 15.550: 99.0528% ( 1) 00:10:31.714 16.972 - 17.067: 99.0604% ( 1) 00:10:31.714 17.067 - 17.161: 99.0681% ( 1) 00:10:31.714 17.256 - 17.351: 99.0757% ( 1) 00:10:31.714 17.351 - 17.446: 99.0910% ( 2) 00:10:31.714 17.446 - 17.541: 99.1063% ( 2) 00:10:31.714 17.541 - 17.636: 99.1139% ( 1) 00:10:31.714 17.636 - 17.730: 99.1445% ( 4) 00:10:31.714 17.730 - 17.825: 99.1979% ( 7) 00:10:31.714 17.825 - 17.920: 99.2361% ( 5) 00:10:31.714 17.920 - 18.015: 99.2819% ( 6) 00:10:31.714 18.015 - 18.110: 99.3125% ( 4) 00:10:31.714 18.110 - 18.204: 99.3660% ( 7) 00:10:31.714 18.204 - 18.299: 99.4882% ( 16) 00:10:31.714 18.299 - 18.394: 99.5264% ( 5) 00:10:31.714 18.394 - 18.489: 99.5875% ( 8) 00:10:31.714 18.489 - 18.584: 99.6104% ( 3) 00:10:31.714 18.584 - 18.679: 99.6715% ( 8) 00:10:31.714 18.679 - 18.773: 99.6944% ( 3) 00:10:31.714 18.773 - 18.868: 99.7021% ( 1) 00:10:31.714 18.868 - 18.963: 99.7097% ( 1) 00:10:31.714 18.963 - 19.058: 99.7326% ( 3) 00:10:31.714 19.058 - 19.153: 99.7403% ( 1) 00:10:31.714 19.153 - 19.247: 99.7708% ( 4) 00:10:31.714 19.247 - 19.342: 99.7785% ( 1) 00:10:31.714 19.532 - 19.627: 99.7938% ( 2) 00:10:31.714 19.627 - 19.721: 99.8090% ( 2) 00:10:31.714 19.721 - 19.816: 99.8167% ( 1) 00:10:31.714 19.816 - 19.911: 99.8243% ( 1) 00:10:31.714 20.006 - 20.101: 99.8319% ( 1) 00:10:31.714 20.290 - 20.385: 99.8396% ( 1) 00:10:31.714 21.049 - 21.144: 99.8472% ( 1) 00:10:31.714 21.902 - 21.997: 99.8549% ( 1) 00:10:31.714 22.187 - 22.281: 99.8625% ( 1) 00:10:31.714 22.566 - 22.661: 99.8701% ( 1) 00:10:31.714 22.945 - 23.040: 99.8778% ( 1) 00:10:31.714 24.083 - 24.178: 99.8854% ( 1) 00:10:31.714 28.065 - 28.255: 99.9007% ( 2) 00:10:31.714 28.444 - 28.634: 99.9083% ( 1) 00:10:31.714 3980.705 - 4004.978: 99.9924% ( 11) 00:10:31.714 4004.978 - 4029.250: 100.0000% ( 1) 00:10:31.714 00:10:31.714 Complete histogram 00:10:31.714 ================== 00:10:31.714 Range in us Cumulative Count 00:10:31.714 2.074 - 2.086: 1.2069% ( 158) 00:10:31.714 2.086 - 2.098: 12.2756% ( 1449) 00:10:31.714 2.098 - 2.110: 15.1707% ( 379) 00:10:31.714 2.110 - 2.121: 38.0108% ( 2990) 00:10:31.714 2.121 - 2.133: 55.9850% ( 2353) 00:10:31.714 2.133 - 2.145: 57.7420% ( 230) 00:10:31.714 2.145 - 2.157: 61.4926% ( 491) 00:10:31.714 2.157 - 2.169: 65.7245% ( 554) 00:10:31.714 2.169 - 2.181: 67.5197% ( 235) 00:10:31.714 2.181 - 2.193: 79.1001% ( 1516) 00:10:31.714 2.193 - 2.204: 85.4633% ( 833) 00:10:31.714 2.204 - 2.216: 86.1737% ( 93) 00:10:31.714 2.216 - 2.228: 87.2737% ( 144) 00:10:31.714 2.228 - 2.240: 88.0911% ( 107) 00:10:31.714 2.240 - 2.252: 88.3890% ( 39) 00:10:31.714 2.252 - 2.264: 90.8716% ( 325) 00:10:31.714 2.264 - 2.276: 93.3924% ( 330) 00:10:31.714 2.276 - 2.287: 93.8431% ( 59) 00:10:31.714 2.287 - 2.299: 94.1716% ( 43) 00:10:31.714 2.299 - 2.311: 94.3473% ( 23) 00:10:31.714 2.311 - 2.323: 94.5077% ( 21) 00:10:31.714 2.323 - 2.335: 94.8209% ( 41) 00:10:31.714 2.335 - 2.347: 95.2868% ( 61) 00:10:31.714 2.347 - 2.359: 95.4243% ( 18) 00:10:31.714 2.359 - 2.370: 95.4854% ( 8) 00:10:31.714 2.370 - 2.382: 95.5542% ( 9) 00:10:31.714 2.382 - 2.394: 95.6077% ( 7) 00:10:31.714 2.394 - 2.406: 95.8063% ( 26) 00:10:31.714 2.406 - 2.418: 
96.0431% ( 31) 00:10:31.714 2.418 - 2.430: 96.2570% ( 28) 00:10:31.714 2.430 - 2.441: 96.5778% ( 42) 00:10:31.715 2.441 - 2.453: 96.7993% ( 29) 00:10:31.715 2.453 - 2.465: 97.0056% ( 27) 00:10:31.715 2.465 - 2.477: 97.1965% ( 25) 00:10:31.715 2.477 - 2.489: 97.3570% ( 21) 00:10:31.715 2.489 - 2.501: 97.5709% ( 28) 00:10:31.715 2.501 - 2.513: 97.7160% ( 19) 00:10:31.715 2.513 - 2.524: 97.7695% ( 7) 00:10:31.715 2.524 - 2.536: 97.8382% ( 9) 00:10:31.715 2.536 - 2.548: 97.8840% ( 6) 00:10:31.715 2.548 - 2.560: 97.9222% ( 5) 00:10:31.715 2.560 - 2.572: 97.9452% ( 3) 00:10:31.715 2.572 - 2.584: 97.9833% ( 5) 00:10:31.715 2.584 - 2.596: 98.0139% ( 4) 00:10:31.715 2.596 - 2.607: 98.0368% ( 3) 00:10:31.715 2.607 - 2.619: 98.0521% ( 2) 00:10:31.715 2.619 - 2.631: 98.0674% ( 2) 00:10:31.715 2.631 - 2.643: 98.0903% ( 3) 00:10:31.715 2.643 - 2.655: 98.1056% ( 2) 00:10:31.715 2.655 - 2.667: 98.1285% ( 3) 00:10:31.715 2.667 - 2.679: 98.1514% ( 3) 00:10:31.715 2.679 - 2.690: 98.1743% ( 3) 00:10:31.715 2.714 - 2.726: 98.2049% ( 4) 00:10:31.715 2.726 - 2.738: 98.2278% ( 3) 00:10:31.715 2.738 - 2.750: 98.2354% ( 1) 00:10:31.715 2.750 - 2.761: 98.2431% ( 1) 00:10:31.715 2.761 - 2.773: 98.2660% ( 3) 00:10:31.715 2.773 - 2.785: 98.2813% ( 2) 00:10:31.715 2.785 - 2.797: 98.2965% ( 2) 00:10:31.715 2.809 - 2.821: 98.3118% ( 2) 00:10:31.715 2.821 - 2.833: 98.3195% ( 1) 00:10:31.715 2.844 - 2.856: 98.3271% ( 1) 00:10:31.715 2.868 - 2.880: 98.3424% ( 2) 00:10:31.715 2.892 - 2.904: 98.3500% ( 1) 00:10:31.715 2.916 - 2.927: 98.3577% ( 1) 00:10:31.715 2.927 - 2.939: 98.3653% ( 1) 00:10:31.715 2.963 - 2.975: 98.3729% ( 1) 00:10:31.715 2.987 - 2.999: 98.3806% ( 1) 00:10:31.715 3.022 - 3.034: 98.3882% ( 1) 00:10:31.715 3.034 - 3.058: 98.4111% ( 3) 00:10:31.715 3.058 - 3.081: 98.4188% ( 1) 00:10:31.715 3.081 - 3.105: 98.4264% ( 1) 00:10:31.715 3.129 - 3.153: 98.4340% ( 1) 00:10:31.715 3.153 - 3.176: 98.4417% ( 1) 00:10:31.715 3.200 - 3.224: 98.4493% ( 1) 00:10:31.715 3.295 - 3.319: 98.4570% ( 1) 00:10:31.715 3.319 - 3.342: 98.4646% ( 1) 00:10:31.715 3.342 - 3.366: 98.4722% ( 1) 00:10:31.715 3.366 - 3.390: 98.4799% ( 1) 00:10:31.715 3.390 - 3.413: 98.4875% ( 1) 00:10:31.715 3.413 - 3.437: 98.5028% ( 2) 00:10:31.715 3.437 - 3.461: 98.5104% ( 1) 00:10:31.715 3.484 - 3.508: 98.5181% ( 1) 00:10:31.715 3.532 - 3.556: 98.5257% ( 1) 00:10:31.715 3.603 - 3.627: 98.5333% ( 1) 00:10:31.715 3.650 - 3.674: 98.5410% ( 1) 00:10:31.715 3.674 - 3.698: 98.5639% ( 3) 00:10:31.715 3.698 - 3.721: 98.5715% ( 1) 00:10:31.715 3.721 - 3.745: 98.5792% ( 1) 00:10:31.715 3.840 - 3.864: 98.5868% ( 1) 00:10:31.715 3.864 - 3.887: 98.5945% ( 1) 00:10:31.715 3.887 - 3.911: 98.6097% ( 2) 00:10:31.715 4.006 - 4.030: 98.6174% ( 1) 00:10:31.715 4.030 - 4.053: 98.6250% ( 1) 00:10:31.715 4.077 - 4.101: 98.6326% ( 1) 00:10:31.715 4.101 - 4.124: 98.6403% ( 1) 00:10:31.715 4.361 - 4.385: 98.6479% ( 1) 00:10:31.715 5.167 - 5.191: 98.6556% ( 1) 00:10:31.715 5.310 - 5.333: 98.6708% ( 2) 00:10:31.715 5.452 - 5.476: 98.6785% ( 1) 00:10:31.715 5.618 - 5.641: 98.6861% ( 1) 00:10:31.715 5.855 - 5.879: 98.6938% ( 1) 00:10:31.715 6.044 - 6.068: 98.7090% ( 2) 00:10:31.715 6.068 - 6.116: 98.7167% ( 1) 00:10:31.715 6.874 - 6.921: 98.7243% ( 1) 00:10:31.715 6.921 - 6.969: 98.7320% ( 1) 00:10:31.715 7.016 - 7.064: 98.7396% ( 1) 00:10:31.715 7.727 - 7.775: 98.7472% ( 1) 00:10:31.715 7.964 - 8.012: 98.7549% ( 1) 00:10:31.715 8.391 - 8.439: 98.7625% ( 1) 00:10:31.715 11.378 - 11.425: 98.7701% ( 1) 00:10:31.715 13.464 - 13.559: 98.7778% ( 1) 00:10:31.715 15.455 - 15.550: 98.7854% ( 
1) 00:10:31.715 15.550 - 15.644: 98.8007% ( 2) 00:10:31.715 15.644 - 15.739: 98.8160% ( 2) 00:10:31.715 15.739 - 15.834: 98.8236% ( 1) 00:10:31.715 15.834 - 15.929: 98.8313% ( 1) 00:10:31.715 15.929 - 16.024: 98.8389% ( 1) 00:10:31.715 16.024 - 16.119: 98.8847% ( 6) 00:10:31.715 16.119 - 16.213: 98.9153% ( 4) 00:10:31.715 16.213 - 16.308: 98.9382% ( 3) 00:10:31.715 16.308 - 16.403: 98.9458% ( 1) 00:10:31.715 16.403 - 16.498: 98.9840% ( 5) 00:10:31.715 16.498 - 16.593: 99.0451% ( 8) 00:10:31.715 16.593 - 16.687: 99.1292% ( 11) 00:10:31.715 16.687 - 16.782: 99.1521% ( 3) 00:10:31.715 16.782 - 16.877: 99.2056% ( 7) 00:10:31.715 16.877 - 16.972: 99.2438% ( 5) 00:10:31.715 16.972 - 17.067: 99.2514%[2024-07-12 11:47:20.847592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:31.715 ( 1) 00:10:31.715 17.067 - 17.161: 99.2667% ( 2) 00:10:31.715 17.161 - 17.256: 99.2896% ( 3) 00:10:31.715 17.256 - 17.351: 99.2972% ( 1) 00:10:31.715 17.351 - 17.446: 99.3125% ( 2) 00:10:31.715 17.446 - 17.541: 99.3201% ( 1) 00:10:31.715 17.541 - 17.636: 99.3354% ( 2) 00:10:31.715 17.730 - 17.825: 99.3507% ( 2) 00:10:31.715 18.015 - 18.110: 99.3660% ( 2) 00:10:31.715 18.110 - 18.204: 99.3736% ( 1) 00:10:31.715 18.299 - 18.394: 99.3813% ( 1) 00:10:31.715 18.394 - 18.489: 99.3889% ( 1) 00:10:31.715 19.627 - 19.721: 99.3965% ( 1) 00:10:31.715 21.997 - 22.092: 99.4042% ( 1) 00:10:31.715 3980.705 - 4004.978: 99.9465% ( 71) 00:10:31.715 4004.978 - 4029.250: 100.0000% ( 7) 00:10:31.715 00:10:31.715 11:47:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:31.715 11:47:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:31.715 11:47:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:31.715 11:47:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:31.715 11:47:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:31.715 [ 00:10:31.715 { 00:10:31.715 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:31.715 "subtype": "Discovery", 00:10:31.715 "listen_addresses": [], 00:10:31.715 "allow_any_host": true, 00:10:31.715 "hosts": [] 00:10:31.715 }, 00:10:31.715 { 00:10:31.715 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:31.715 "subtype": "NVMe", 00:10:31.715 "listen_addresses": [ 00:10:31.715 { 00:10:31.715 "trtype": "VFIOUSER", 00:10:31.715 "adrfam": "IPv4", 00:10:31.715 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:31.715 "trsvcid": "0" 00:10:31.715 } 00:10:31.715 ], 00:10:31.715 "allow_any_host": true, 00:10:31.715 "hosts": [], 00:10:31.715 "serial_number": "SPDK1", 00:10:31.715 "model_number": "SPDK bdev Controller", 00:10:31.715 "max_namespaces": 32, 00:10:31.715 "min_cntlid": 1, 00:10:31.715 "max_cntlid": 65519, 00:10:31.715 "namespaces": [ 00:10:31.715 { 00:10:31.715 "nsid": 1, 00:10:31.715 "bdev_name": "Malloc1", 00:10:31.715 "name": "Malloc1", 00:10:31.715 "nguid": "DF9B9183405A4B7FA20724585424C2B4", 00:10:31.715 "uuid": "df9b9183-405a-4b7f-a207-24585424c2b4" 00:10:31.715 } 00:10:31.715 ] 00:10:31.715 }, 00:10:31.715 { 00:10:31.715 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:31.715 "subtype": "NVMe", 00:10:31.715 "listen_addresses": [ 00:10:31.715 { 00:10:31.715 "trtype": "VFIOUSER", 
00:10:31.715 "adrfam": "IPv4", 00:10:31.715 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:31.715 "trsvcid": "0" 00:10:31.715 } 00:10:31.715 ], 00:10:31.715 "allow_any_host": true, 00:10:31.715 "hosts": [], 00:10:31.715 "serial_number": "SPDK2", 00:10:31.715 "model_number": "SPDK bdev Controller", 00:10:31.715 "max_namespaces": 32, 00:10:31.715 "min_cntlid": 1, 00:10:31.715 "max_cntlid": 65519, 00:10:31.715 "namespaces": [ 00:10:31.715 { 00:10:31.715 "nsid": 1, 00:10:31.715 "bdev_name": "Malloc2", 00:10:31.715 "name": "Malloc2", 00:10:31.715 "nguid": "B143040D30CA473BA1EF3698F2FCC544", 00:10:31.715 "uuid": "b143040d-30ca-473b-a1ef-3698f2fcc544" 00:10:31.715 } 00:10:31.715 ] 00:10:31.715 } 00:10:31.715 ] 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=871259 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:31.715 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:31.973 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.973 [2024-07-12 11:47:21.330327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:31.973 Malloc3 00:10:31.973 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:32.231 [2024-07-12 11:47:21.687960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:32.231 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:32.489 Asynchronous Event Request test 00:10:32.489 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.489 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.489 Registering asynchronous event callbacks... 00:10:32.489 Starting namespace attribute notice tests for all controllers... 00:10:32.489 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:32.489 aer_cb - Changed Namespace 00:10:32.489 Cleaning up... 
00:10:32.489 [ 00:10:32.489 { 00:10:32.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:32.489 "subtype": "Discovery", 00:10:32.489 "listen_addresses": [], 00:10:32.489 "allow_any_host": true, 00:10:32.489 "hosts": [] 00:10:32.489 }, 00:10:32.489 { 00:10:32.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:32.489 "subtype": "NVMe", 00:10:32.489 "listen_addresses": [ 00:10:32.489 { 00:10:32.489 "trtype": "VFIOUSER", 00:10:32.489 "adrfam": "IPv4", 00:10:32.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:32.489 "trsvcid": "0" 00:10:32.489 } 00:10:32.489 ], 00:10:32.489 "allow_any_host": true, 00:10:32.489 "hosts": [], 00:10:32.489 "serial_number": "SPDK1", 00:10:32.489 "model_number": "SPDK bdev Controller", 00:10:32.489 "max_namespaces": 32, 00:10:32.489 "min_cntlid": 1, 00:10:32.489 "max_cntlid": 65519, 00:10:32.489 "namespaces": [ 00:10:32.489 { 00:10:32.489 "nsid": 1, 00:10:32.489 "bdev_name": "Malloc1", 00:10:32.489 "name": "Malloc1", 00:10:32.489 "nguid": "DF9B9183405A4B7FA20724585424C2B4", 00:10:32.489 "uuid": "df9b9183-405a-4b7f-a207-24585424c2b4" 00:10:32.489 }, 00:10:32.489 { 00:10:32.489 "nsid": 2, 00:10:32.490 "bdev_name": "Malloc3", 00:10:32.490 "name": "Malloc3", 00:10:32.490 "nguid": "9033E32F29B6447BA33906B6534CC5BD", 00:10:32.490 "uuid": "9033e32f-29b6-447b-a339-06b6534cc5bd" 00:10:32.490 } 00:10:32.490 ] 00:10:32.490 }, 00:10:32.490 { 00:10:32.490 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:32.490 "subtype": "NVMe", 00:10:32.490 "listen_addresses": [ 00:10:32.490 { 00:10:32.490 "trtype": "VFIOUSER", 00:10:32.490 "adrfam": "IPv4", 00:10:32.490 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:32.490 "trsvcid": "0" 00:10:32.490 } 00:10:32.490 ], 00:10:32.490 "allow_any_host": true, 00:10:32.490 "hosts": [], 00:10:32.490 "serial_number": "SPDK2", 00:10:32.490 "model_number": "SPDK bdev Controller", 00:10:32.490 "max_namespaces": 32, 00:10:32.490 "min_cntlid": 1, 00:10:32.490 "max_cntlid": 65519, 00:10:32.490 "namespaces": [ 00:10:32.490 { 00:10:32.490 "nsid": 1, 00:10:32.490 "bdev_name": "Malloc2", 00:10:32.490 "name": "Malloc2", 00:10:32.490 "nguid": "B143040D30CA473BA1EF3698F2FCC544", 00:10:32.490 "uuid": "b143040d-30ca-473b-a1ef-3698f2fcc544" 00:10:32.490 } 00:10:32.490 ] 00:10:32.490 } 00:10:32.490 ] 00:10:32.490 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 871259 00:10:32.490 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:32.490 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:32.490 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:32.749 11:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:32.749 [2024-07-12 11:47:21.996743] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
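Editor's note: the identify step whose debug trace follows is driven by a per-device loop. A condensed sketch is below; NUM_DEVICES=2 is an assumption inferred from the two vfio-user domains this run creates, not quoted from the log.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NUM_DEVICES=2   # inferred from vfio-user1/1 and vfio-user2/2 above
  for i in $(seq 1 $NUM_DEVICES); do
      test_traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      test_subnqn=nqn.2019-07.io.spdk:cnode$i
      # -L enables the nvme, nvme_vfio and vfio_pci debug log components whose
      # output makes up the controller-initialization trace that follows.
      "$SPDK/build/bin/spdk_nvme_identify" \
          -r "trtype:VFIOUSER traddr:$test_traddr subnqn:$test_subnqn" \
          -g -L nvme -L nvme_vfio -L vfio_pci
  done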
00:10:32.749 [2024-07-12 11:47:21.996778] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871391 ] 00:10:32.749 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.749 [2024-07-12 11:47:22.029025] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:32.749 [2024-07-12 11:47:22.034348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:32.749 [2024-07-12 11:47:22.034377] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f040e6e9000 00:10:32.749 [2024-07-12 11:47:22.035346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.036353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.037357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.038362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.039372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.040375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.041381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.042389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.750 [2024-07-12 11:47:22.043405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:32.750 [2024-07-12 11:47:22.043430] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f040e6de000 00:10:32.750 [2024-07-12 11:47:22.044545] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:32.750 [2024-07-12 11:47:22.059241] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:32.750 [2024-07-12 11:47:22.059273] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:32.750 [2024-07-12 11:47:22.061368] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:32.750 [2024-07-12 11:47:22.061420] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:32.750 [2024-07-12 11:47:22.061503] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:10:32.750 [2024-07-12 11:47:22.061527] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:32.750 [2024-07-12 11:47:22.061537] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:32.750 [2024-07-12 11:47:22.062375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:32.750 [2024-07-12 11:47:22.062394] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:32.750 [2024-07-12 11:47:22.062407] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:32.750 [2024-07-12 11:47:22.063379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:32.750 [2024-07-12 11:47:22.063399] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:32.750 [2024-07-12 11:47:22.063412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.064382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:32.750 [2024-07-12 11:47:22.064401] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.065388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:32.750 [2024-07-12 11:47:22.065408] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:32.750 [2024-07-12 11:47:22.065417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.065428] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.065537] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:32.750 [2024-07-12 11:47:22.065544] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.065552] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:32.750 [2024-07-12 11:47:22.066397] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:32.750 [2024-07-12 11:47:22.067407] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:32.750 [2024-07-12 11:47:22.068411] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:32.750 [2024-07-12 11:47:22.069406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:32.750 [2024-07-12 11:47:22.069478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:32.750 [2024-07-12 11:47:22.070423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:32.750 [2024-07-12 11:47:22.070443] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:32.750 [2024-07-12 11:47:22.070452] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.070475] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:32.750 [2024-07-12 11:47:22.070488] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.070508] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.750 [2024-07-12 11:47:22.070517] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.750 [2024-07-12 11:47:22.070534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.750 [2024-07-12 11:47:22.080890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.080913] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:32.750 [2024-07-12 11:47:22.080922] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:32.750 [2024-07-12 11:47:22.080930] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:32.750 [2024-07-12 11:47:22.080942] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:32.750 [2024-07-12 11:47:22.080953] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:32.750 [2024-07-12 11:47:22.080962] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:32.750 [2024-07-12 11:47:22.080970] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.080983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.080999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:32.750 [2024-07-12 11:47:22.088877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.088901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.750 [2024-07-12 11:47:22.088915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.750 [2024-07-12 11:47:22.088928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.750 [2024-07-12 11:47:22.088940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.750 [2024-07-12 11:47:22.088949] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.088964] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.088979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:32.750 [2024-07-12 11:47:22.096878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.096896] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:32.750 [2024-07-12 11:47:22.096905] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.096917] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.096926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.096940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:32.750 [2024-07-12 11:47:22.104877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.104941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.104958] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.104972] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:32.750 [2024-07-12 11:47:22.104981] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:32.750 [2024-07-12 11:47:22.104991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:32.750 
[2024-07-12 11:47:22.112878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.112902] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:32.750 [2024-07-12 11:47:22.112921] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.112936] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:32.750 [2024-07-12 11:47:22.112949] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.750 [2024-07-12 11:47:22.112958] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.750 [2024-07-12 11:47:22.112968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.750 [2024-07-12 11:47:22.120877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:32.750 [2024-07-12 11:47:22.120905] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.120922] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.120936] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.751 [2024-07-12 11:47:22.120944] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.751 [2024-07-12 11:47:22.120954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.128876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.128898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128911] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128936] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128952] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:32.751 [2024-07-12 11:47:22.128960] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:32.751 [2024-07-12 11:47:22.128968] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:32.751 [2024-07-12 11:47:22.128994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.136876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.136902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.144875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.144905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.152890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.152925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.160879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.160905] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:32.751 [2024-07-12 11:47:22.160916] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:32.751 [2024-07-12 11:47:22.160922] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:32.751 [2024-07-12 11:47:22.160928] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:32.751 [2024-07-12 11:47:22.160939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:32.751 [2024-07-12 11:47:22.160951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:32.751 [2024-07-12 11:47:22.160959] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:32.751 [2024-07-12 11:47:22.160968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.160979] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:32.751 [2024-07-12 11:47:22.160988] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.751 [2024-07-12 11:47:22.160997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.161009] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:32.751 [2024-07-12 11:47:22.161017] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:32.751 [2024-07-12 11:47:22.161026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:32.751 [2024-07-12 11:47:22.168880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.168909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.168926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:32.751 [2024-07-12 11:47:22.168941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:32.751 ===================================================== 00:10:32.751 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:32.751 ===================================================== 00:10:32.751 Controller Capabilities/Features 00:10:32.751 ================================ 00:10:32.751 Vendor ID: 4e58 00:10:32.751 Subsystem Vendor ID: 4e58 00:10:32.751 Serial Number: SPDK2 00:10:32.751 Model Number: SPDK bdev Controller 00:10:32.751 Firmware Version: 24.09 00:10:32.751 Recommended Arb Burst: 6 00:10:32.751 IEEE OUI Identifier: 8d 6b 50 00:10:32.751 Multi-path I/O 00:10:32.751 May have multiple subsystem ports: Yes 00:10:32.751 May have multiple controllers: Yes 00:10:32.751 Associated with SR-IOV VF: No 00:10:32.751 Max Data Transfer Size: 131072 00:10:32.751 Max Number of Namespaces: 32 00:10:32.751 Max Number of I/O Queues: 127 00:10:32.751 NVMe Specification Version (VS): 1.3 00:10:32.751 NVMe Specification Version (Identify): 1.3 00:10:32.751 Maximum Queue Entries: 256 00:10:32.751 Contiguous Queues Required: Yes 00:10:32.751 Arbitration Mechanisms Supported 00:10:32.751 Weighted Round Robin: Not Supported 00:10:32.751 Vendor Specific: Not Supported 00:10:32.751 Reset Timeout: 15000 ms 00:10:32.751 Doorbell Stride: 4 bytes 00:10:32.751 NVM Subsystem Reset: Not Supported 00:10:32.751 Command Sets Supported 00:10:32.751 NVM Command Set: Supported 00:10:32.751 Boot Partition: Not Supported 00:10:32.751 Memory Page Size Minimum: 4096 bytes 00:10:32.751 Memory Page Size Maximum: 4096 bytes 00:10:32.751 Persistent Memory Region: Not Supported 00:10:32.751 Optional Asynchronous Events Supported 00:10:32.751 Namespace Attribute Notices: Supported 00:10:32.751 Firmware Activation Notices: Not Supported 00:10:32.751 ANA Change Notices: Not Supported 00:10:32.751 PLE Aggregate Log Change Notices: Not Supported 00:10:32.751 LBA Status Info Alert Notices: Not Supported 00:10:32.751 EGE Aggregate Log Change Notices: Not Supported 00:10:32.751 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.751 Zone Descriptor Change Notices: Not Supported 00:10:32.751 Discovery Log Change Notices: Not Supported 00:10:32.751 Controller Attributes 00:10:32.751 128-bit Host Identifier: Supported 00:10:32.751 Non-Operational Permissive Mode: Not Supported 00:10:32.751 NVM Sets: Not Supported 00:10:32.751 Read Recovery Levels: Not Supported 00:10:32.751 Endurance Groups: Not Supported 00:10:32.751 Predictable Latency Mode: Not Supported 00:10:32.751 Traffic Based Keep ALive: Not Supported 00:10:32.751 Namespace Granularity: Not Supported 
00:10:32.751 SQ Associations: Not Supported 00:10:32.751 UUID List: Not Supported 00:10:32.751 Multi-Domain Subsystem: Not Supported 00:10:32.751 Fixed Capacity Management: Not Supported 00:10:32.751 Variable Capacity Management: Not Supported 00:10:32.751 Delete Endurance Group: Not Supported 00:10:32.751 Delete NVM Set: Not Supported 00:10:32.751 Extended LBA Formats Supported: Not Supported 00:10:32.751 Flexible Data Placement Supported: Not Supported 00:10:32.751 00:10:32.751 Controller Memory Buffer Support 00:10:32.751 ================================ 00:10:32.751 Supported: No 00:10:32.751 00:10:32.751 Persistent Memory Region Support 00:10:32.751 ================================ 00:10:32.751 Supported: No 00:10:32.751 00:10:32.751 Admin Command Set Attributes 00:10:32.751 ============================ 00:10:32.751 Security Send/Receive: Not Supported 00:10:32.751 Format NVM: Not Supported 00:10:32.751 Firmware Activate/Download: Not Supported 00:10:32.751 Namespace Management: Not Supported 00:10:32.751 Device Self-Test: Not Supported 00:10:32.751 Directives: Not Supported 00:10:32.751 NVMe-MI: Not Supported 00:10:32.751 Virtualization Management: Not Supported 00:10:32.751 Doorbell Buffer Config: Not Supported 00:10:32.751 Get LBA Status Capability: Not Supported 00:10:32.751 Command & Feature Lockdown Capability: Not Supported 00:10:32.751 Abort Command Limit: 4 00:10:32.751 Async Event Request Limit: 4 00:10:32.751 Number of Firmware Slots: N/A 00:10:32.751 Firmware Slot 1 Read-Only: N/A 00:10:32.751 Firmware Activation Without Reset: N/A 00:10:32.751 Multiple Update Detection Support: N/A 00:10:32.751 Firmware Update Granularity: No Information Provided 00:10:32.751 Per-Namespace SMART Log: No 00:10:32.751 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.751 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:32.751 Command Effects Log Page: Supported 00:10:32.751 Get Log Page Extended Data: Supported 00:10:32.751 Telemetry Log Pages: Not Supported 00:10:32.751 Persistent Event Log Pages: Not Supported 00:10:32.751 Supported Log Pages Log Page: May Support 00:10:32.751 Commands Supported & Effects Log Page: Not Supported 00:10:32.751 Feature Identifiers & Effects Log Page:May Support 00:10:32.751 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.751 Data Area 4 for Telemetry Log: Not Supported 00:10:32.751 Error Log Page Entries Supported: 128 00:10:32.751 Keep Alive: Supported 00:10:32.751 Keep Alive Granularity: 10000 ms 00:10:32.751 00:10:32.751 NVM Command Set Attributes 00:10:32.751 ========================== 00:10:32.751 Submission Queue Entry Size 00:10:32.751 Max: 64 00:10:32.751 Min: 64 00:10:32.751 Completion Queue Entry Size 00:10:32.751 Max: 16 00:10:32.751 Min: 16 00:10:32.751 Number of Namespaces: 32 00:10:32.752 Compare Command: Supported 00:10:32.752 Write Uncorrectable Command: Not Supported 00:10:32.752 Dataset Management Command: Supported 00:10:32.752 Write Zeroes Command: Supported 00:10:32.752 Set Features Save Field: Not Supported 00:10:32.752 Reservations: Not Supported 00:10:32.752 Timestamp: Not Supported 00:10:32.752 Copy: Supported 00:10:32.752 Volatile Write Cache: Present 00:10:32.752 Atomic Write Unit (Normal): 1 00:10:32.752 Atomic Write Unit (PFail): 1 00:10:32.752 Atomic Compare & Write Unit: 1 00:10:32.752 Fused Compare & Write: Supported 00:10:32.752 Scatter-Gather List 00:10:32.752 SGL Command Set: Supported (Dword aligned) 00:10:32.752 SGL Keyed: Not Supported 00:10:32.752 SGL Bit Bucket Descriptor: Not Supported 00:10:32.752 
SGL Metadata Pointer: Not Supported 00:10:32.752 Oversized SGL: Not Supported 00:10:32.752 SGL Metadata Address: Not Supported 00:10:32.752 SGL Offset: Not Supported 00:10:32.752 Transport SGL Data Block: Not Supported 00:10:32.752 Replay Protected Memory Block: Not Supported 00:10:32.752 00:10:32.752 Firmware Slot Information 00:10:32.752 ========================= 00:10:32.752 Active slot: 1 00:10:32.752 Slot 1 Firmware Revision: 24.09 00:10:32.752 00:10:32.752 00:10:32.752 Commands Supported and Effects 00:10:32.752 ============================== 00:10:32.752 Admin Commands 00:10:32.752 -------------- 00:10:32.752 Get Log Page (02h): Supported 00:10:32.752 Identify (06h): Supported 00:10:32.752 Abort (08h): Supported 00:10:32.752 Set Features (09h): Supported 00:10:32.752 Get Features (0Ah): Supported 00:10:32.752 Asynchronous Event Request (0Ch): Supported 00:10:32.752 Keep Alive (18h): Supported 00:10:32.752 I/O Commands 00:10:32.752 ------------ 00:10:32.752 Flush (00h): Supported LBA-Change 00:10:32.752 Write (01h): Supported LBA-Change 00:10:32.752 Read (02h): Supported 00:10:32.752 Compare (05h): Supported 00:10:32.752 Write Zeroes (08h): Supported LBA-Change 00:10:32.752 Dataset Management (09h): Supported LBA-Change 00:10:32.752 Copy (19h): Supported LBA-Change 00:10:32.752 Unknown (79h): Supported LBA-Change 00:10:32.752 Unknown (7Ah): Supported 00:10:32.752 00:10:32.752 Error Log 00:10:32.752 ========= 00:10:32.752 00:10:32.752 Arbitration 00:10:32.752 =========== 00:10:32.752 Arbitration Burst: 1 00:10:32.752 00:10:32.752 Power Management 00:10:32.752 ================ 00:10:32.752 Number of Power States: 1 00:10:32.752 Current Power State: Power State #0 00:10:32.752 Power State #0: 00:10:32.752 Max Power: 0.00 W 00:10:32.752 Non-Operational State: Operational 00:10:32.752 Entry Latency: Not Reported 00:10:32.752 Exit Latency: Not Reported 00:10:32.752 Relative Read Throughput: 0 00:10:32.752 Relative Read Latency: 0 00:10:32.752 Relative Write Throughput: 0 00:10:32.752 Relative Write Latency: 0 00:10:32.752 Idle Power: Not Reported 00:10:32.752 Active Power: Not Reported 00:10:32.752 Non-Operational Permissive Mode: Not Supported 00:10:32.752 00:10:32.752 Health Information 00:10:32.752 ================== 00:10:32.752 Critical Warnings: 00:10:32.752 Available Spare Space: OK 00:10:32.752 Temperature: OK 00:10:32.752 Device Reliability: OK 00:10:32.752 Read Only: No 00:10:32.752 Volatile Memory Backup: OK 00:10:32.752 Current Temperature: 0 Kelvin (-2[2024-07-12 11:47:22.169059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:32.752 [2024-07-12 11:47:22.176878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:32.752 [2024-07-12 11:47:22.176921] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:32.752 [2024-07-12 11:47:22.176938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.752 [2024-07-12 11:47:22.176950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.752 [2024-07-12 11:47:22.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.752 [2024-07-12 11:47:22.176974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.752 [2024-07-12 11:47:22.177054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:32.752 [2024-07-12 11:47:22.177075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:32.752 [2024-07-12 11:47:22.178055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:32.752 [2024-07-12 11:47:22.178125] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:32.752 [2024-07-12 11:47:22.178139] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:32.752 [2024-07-12 11:47:22.179062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:32.752 [2024-07-12 11:47:22.179086] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:32.752 [2024-07-12 11:47:22.179137] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:32.752 [2024-07-12 11:47:22.180330] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:32.752 73 Celsius) 00:10:32.752 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:32.752 Available Spare: 0% 00:10:32.752 Available Spare Threshold: 0% 00:10:32.752 Life Percentage Used: 0% 00:10:32.752 Data Units Read: 0 00:10:32.752 Data Units Written: 0 00:10:32.752 Host Read Commands: 0 00:10:32.752 Host Write Commands: 0 00:10:32.752 Controller Busy Time: 0 minutes 00:10:32.752 Power Cycles: 0 00:10:32.752 Power On Hours: 0 hours 00:10:32.752 Unsafe Shutdowns: 0 00:10:32.752 Unrecoverable Media Errors: 0 00:10:32.752 Lifetime Error Log Entries: 0 00:10:32.752 Warning Temperature Time: 0 minutes 00:10:32.752 Critical Temperature Time: 0 minutes 00:10:32.752 00:10:32.752 Number of Queues 00:10:32.752 ================ 00:10:32.752 Number of I/O Submission Queues: 127 00:10:32.752 Number of I/O Completion Queues: 127 00:10:32.752 00:10:32.752 Active Namespaces 00:10:32.752 ================= 00:10:32.752 Namespace ID:1 00:10:32.752 Error Recovery Timeout: Unlimited 00:10:32.752 Command Set Identifier: NVM (00h) 00:10:32.752 Deallocate: Supported 00:10:32.752 Deallocated/Unwritten Error: Not Supported 00:10:32.752 Deallocated Read Value: Unknown 00:10:32.752 Deallocate in Write Zeroes: Not Supported 00:10:32.752 Deallocated Guard Field: 0xFFFF 00:10:32.752 Flush: Supported 00:10:32.752 Reservation: Supported 00:10:32.752 Namespace Sharing Capabilities: Multiple Controllers 00:10:32.752 Size (in LBAs): 131072 (0GiB) 00:10:32.752 Capacity (in LBAs): 131072 (0GiB) 00:10:32.752 Utilization (in LBAs): 131072 (0GiB) 00:10:32.752 NGUID: B143040D30CA473BA1EF3698F2FCC544 00:10:32.752 UUID: b143040d-30ca-473b-a1ef-3698f2fcc544 00:10:32.752 Thin Provisioning: Not Supported 00:10:32.752 Per-NS Atomic Units: Yes 00:10:32.752 Atomic Boundary Size (Normal): 0 00:10:32.752 Atomic Boundary Size (PFail): 0 00:10:32.752 Atomic Boundary Offset: 0 00:10:32.752 Maximum Single Source Range Length: 65535 
00:10:32.752 Maximum Copy Length: 65535 00:10:32.752 Maximum Source Range Count: 1 00:10:32.752 NGUID/EUI64 Never Reused: No 00:10:32.752 Namespace Write Protected: No 00:10:32.752 Number of LBA Formats: 1 00:10:32.752 Current LBA Format: LBA Format #00 00:10:32.752 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.752 00:10:32.752 11:47:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:33.013 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.013 [2024-07-12 11:47:22.405623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:38.291 Initializing NVMe Controllers 00:10:38.291 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:38.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:38.291 Initialization complete. Launching workers. 00:10:38.291 ======================================================== 00:10:38.291 Latency(us) 00:10:38.291 Device Information : IOPS MiB/s Average min max 00:10:38.291 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34444.93 134.55 3715.36 1168.29 7329.80 00:10:38.291 ======================================================== 00:10:38.291 Total : 34444.93 134.55 3715.36 1168.29 7329.80 00:10:38.291 00:10:38.291 [2024-07-12 11:47:27.508219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:38.291 11:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:38.291 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.291 [2024-07-12 11:47:27.740872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:43.561 Initializing NVMe Controllers 00:10:43.561 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:43.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:43.561 Initialization complete. Launching workers. 
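Editor's note: the read and write phases above share one spdk_nvme_perf invocation apart from the workload flag. A minimal sketch with the arguments copied from this run (the target string is the vfio-user endpoint used throughout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  for workload in read write; do
      # queue depth 128 (-q), 4 KiB I/O (-o), 5 seconds (-t), core mask 0x2 (-c);
      # -s 256 and -g are carried over from the run's command line as-is.
      "$SPDK/build/bin/spdk_nvme_perf" -r "$TGT" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done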
00:10:43.561 ======================================================== 00:10:43.561 Latency(us) 00:10:43.561 Device Information : IOPS MiB/s Average min max 00:10:43.561 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32315.35 126.23 3960.27 1201.85 7856.18 00:10:43.561 ======================================================== 00:10:43.561 Total : 32315.35 126.23 3960.27 1201.85 7856.18 00:10:43.561 00:10:43.561 [2024-07-12 11:47:32.762344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:43.561 11:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:43.561 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.561 [2024-07-12 11:47:32.975169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:48.863 [2024-07-12 11:47:38.109012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:48.863 Initializing NVMe Controllers 00:10:48.863 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:48.863 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:48.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:48.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:48.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:48.863 Initialization complete. Launching workers. 00:10:48.863 Starting thread on core 2 00:10:48.863 Starting thread on core 3 00:10:48.863 Starting thread on core 1 00:10:48.863 11:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:48.863 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.121 [2024-07-12 11:47:38.403353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:52.416 [2024-07-12 11:47:41.686263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:52.416 Initializing NVMe Controllers 00:10:52.416 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.416 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:52.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:52.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:52.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:52.416 Initialization complete. Launching workers. 
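Editor's note: the reconnect and arbitration examples that produced the output above take the same target string; a sketch with their flags copied from this run's command lines:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 50/50 random read/write (-w randrw -M 50), queue depth 32, 4 KiB I/O, 5 seconds
  # on cores 1-3 (-c 0xE), matching the three worker threads started above.
  "$SPDK/build/examples/reconnect" -r "$TGT" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # Arbitration example, 3-second run; the tool prints its own expanded
  # configuration line, as seen in the log.
  "$SPDK/build/examples/arbitration" -t 3 -r "$TGT" -d 256 -g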
00:10:52.416 Starting thread on core 1 with urgent priority queue 00:10:52.416 Starting thread on core 2 with urgent priority queue 00:10:52.416 Starting thread on core 3 with urgent priority queue 00:10:52.416 Starting thread on core 0 with urgent priority queue 00:10:52.416 SPDK bdev Controller (SPDK2 ) core 0: 4382.33 IO/s 22.82 secs/100000 ios 00:10:52.416 SPDK bdev Controller (SPDK2 ) core 1: 4811.00 IO/s 20.79 secs/100000 ios 00:10:52.416 SPDK bdev Controller (SPDK2 ) core 2: 4752.33 IO/s 21.04 secs/100000 ios 00:10:52.416 SPDK bdev Controller (SPDK2 ) core 3: 4790.33 IO/s 20.88 secs/100000 ios 00:10:52.416 ======================================================== 00:10:52.416 00:10:52.416 11:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:52.416 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.674 [2024-07-12 11:47:41.985352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:52.674 Initializing NVMe Controllers 00:10:52.674 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.674 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.674 Namespace ID: 1 size: 0GB 00:10:52.674 Initialization complete. 00:10:52.674 INFO: using host memory buffer for IO 00:10:52.674 Hello world! 00:10:52.674 [2024-07-12 11:47:41.994405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:52.674 11:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:52.674 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.934 [2024-07-12 11:47:42.289406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:54.314 Initializing NVMe Controllers 00:10:54.314 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.314 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.314 Initialization complete. Launching workers. 
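Editor's note: the hello_world smoke test and the overhead measurement whose submit/complete histograms follow were launched as sketched below; the arguments are the ones printed by this run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # Attaches to the controller, uses namespace 1 and prints "Hello world!".
  "$SPDK/build/examples/hello_world" -d 256 -g -r "$TGT"
  # Per-I/O overhead measurement: 4 KiB I/O for 1 second; its submit/complete
  # latency histograms are the ones reported below.
  "$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 -r "$TGT"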
00:10:54.314 submit (in ns) avg, min, max = 6498.2, 3513.3, 4999833.3 00:10:54.314 complete (in ns) avg, min, max = 27671.8, 2082.2, 5996697.8 00:10:54.314 00:10:54.314 Submit histogram 00:10:54.314 ================ 00:10:54.314 Range in us Cumulative Count 00:10:54.314 3.508 - 3.532: 0.2239% ( 29) 00:10:54.314 3.532 - 3.556: 0.6638% ( 57) 00:10:54.314 3.556 - 3.579: 2.7017% ( 264) 00:10:54.314 3.579 - 3.603: 6.5149% ( 494) 00:10:54.314 3.603 - 3.627: 13.4465% ( 898) 00:10:54.314 3.627 - 3.650: 22.5627% ( 1181) 00:10:54.314 3.650 - 3.674: 33.5623% ( 1425) 00:10:54.314 3.674 - 3.698: 42.3389% ( 1137) 00:10:54.314 3.698 - 3.721: 49.5484% ( 934) 00:10:54.314 3.721 - 3.745: 53.3771% ( 496) 00:10:54.314 3.745 - 3.769: 57.6380% ( 552) 00:10:54.314 3.769 - 3.793: 61.4512% ( 494) 00:10:54.314 3.793 - 3.816: 64.9093% ( 448) 00:10:54.314 3.816 - 3.840: 68.2594% ( 434) 00:10:54.314 3.840 - 3.864: 72.0185% ( 487) 00:10:54.314 3.864 - 3.887: 76.2640% ( 550) 00:10:54.314 3.887 - 3.911: 80.3937% ( 535) 00:10:54.314 3.911 - 3.935: 83.9753% ( 464) 00:10:54.314 3.935 - 3.959: 86.1366% ( 280) 00:10:54.314 3.959 - 3.982: 88.0587% ( 249) 00:10:54.314 3.982 - 4.006: 89.9344% ( 243) 00:10:54.314 4.006 - 4.030: 91.0614% ( 146) 00:10:54.314 4.030 - 4.053: 92.1343% ( 139) 00:10:54.314 4.053 - 4.077: 93.1069% ( 126) 00:10:54.314 4.077 - 4.101: 93.9328% ( 107) 00:10:54.314 4.101 - 4.124: 94.6584% ( 94) 00:10:54.314 4.124 - 4.148: 95.2528% ( 77) 00:10:54.314 4.148 - 4.172: 95.6696% ( 54) 00:10:54.314 4.172 - 4.196: 96.0787% ( 53) 00:10:54.314 4.196 - 4.219: 96.3335% ( 33) 00:10:54.314 4.219 - 4.243: 96.5496% ( 28) 00:10:54.314 4.243 - 4.267: 96.6808% ( 17) 00:10:54.314 4.267 - 4.290: 96.7812% ( 13) 00:10:54.314 4.290 - 4.314: 96.8661% ( 11) 00:10:54.314 4.314 - 4.338: 96.9355% ( 9) 00:10:54.314 4.338 - 4.361: 97.0359% ( 13) 00:10:54.314 4.361 - 4.385: 97.0976% ( 8) 00:10:54.314 4.385 - 4.409: 97.1517% ( 7) 00:10:54.314 4.409 - 4.433: 97.2134% ( 8) 00:10:54.314 4.433 - 4.456: 97.2366% ( 3) 00:10:54.314 4.456 - 4.480: 97.2597% ( 3) 00:10:54.314 4.480 - 4.504: 97.2829% ( 3) 00:10:54.314 4.504 - 4.527: 97.3061% ( 3) 00:10:54.314 4.527 - 4.551: 97.3138% ( 1) 00:10:54.314 4.551 - 4.575: 97.3447% ( 4) 00:10:54.314 4.575 - 4.599: 97.3524% ( 1) 00:10:54.314 4.646 - 4.670: 97.3678% ( 2) 00:10:54.314 4.670 - 4.693: 97.3832% ( 2) 00:10:54.314 4.693 - 4.717: 97.4218% ( 5) 00:10:54.314 4.764 - 4.788: 97.4450% ( 3) 00:10:54.314 4.788 - 4.812: 97.4682% ( 3) 00:10:54.314 4.812 - 4.836: 97.5222% ( 7) 00:10:54.314 4.836 - 4.859: 97.5685% ( 6) 00:10:54.314 4.859 - 4.883: 97.5994% ( 4) 00:10:54.314 4.883 - 4.907: 97.6225% ( 3) 00:10:54.314 4.907 - 4.930: 97.6843% ( 8) 00:10:54.314 4.930 - 4.954: 97.7383% ( 7) 00:10:54.314 4.954 - 4.978: 97.7692% ( 4) 00:10:54.314 4.978 - 5.001: 97.8001% ( 4) 00:10:54.314 5.001 - 5.025: 97.8232% ( 3) 00:10:54.314 5.025 - 5.049: 97.8695% ( 6) 00:10:54.314 5.049 - 5.073: 97.9390% ( 9) 00:10:54.314 5.073 - 5.096: 98.0008% ( 8) 00:10:54.314 5.096 - 5.120: 98.0239% ( 3) 00:10:54.314 5.120 - 5.144: 98.0934% ( 9) 00:10:54.314 5.144 - 5.167: 98.1166% ( 3) 00:10:54.314 5.167 - 5.191: 98.1552% ( 5) 00:10:54.314 5.191 - 5.215: 98.1783% ( 3) 00:10:54.314 5.239 - 5.262: 98.2246% ( 6) 00:10:54.314 5.286 - 5.310: 98.2323% ( 1) 00:10:54.314 5.310 - 5.333: 98.2632% ( 4) 00:10:54.314 5.333 - 5.357: 98.2787% ( 2) 00:10:54.314 5.357 - 5.381: 98.2941% ( 2) 00:10:54.314 5.381 - 5.404: 98.3095% ( 2) 00:10:54.314 5.404 - 5.428: 98.3250% ( 2) 00:10:54.314 5.428 - 5.452: 98.3327% ( 1) 00:10:54.314 5.499 - 5.523: 98.3481% ( 2) 
00:10:54.314 5.570 - 5.594: 98.3558% ( 1) 00:10:54.314 5.594 - 5.618: 98.3636% ( 1) 00:10:54.314 5.641 - 5.665: 98.3713% ( 1) 00:10:54.314 5.713 - 5.736: 98.3790% ( 1) 00:10:54.314 5.831 - 5.855: 98.3867% ( 1) 00:10:54.314 5.879 - 5.902: 98.3944% ( 1) 00:10:54.315 5.950 - 5.973: 98.4022% ( 1) 00:10:54.315 6.044 - 6.068: 98.4099% ( 1) 00:10:54.315 6.068 - 6.116: 98.4176% ( 1) 00:10:54.315 6.163 - 6.210: 98.4253% ( 1) 00:10:54.315 6.210 - 6.258: 98.4330% ( 1) 00:10:54.315 6.258 - 6.305: 98.4408% ( 1) 00:10:54.315 6.542 - 6.590: 98.4562% ( 2) 00:10:54.315 6.590 - 6.637: 98.4639% ( 1) 00:10:54.315 6.684 - 6.732: 98.4716% ( 1) 00:10:54.315 6.732 - 6.779: 98.4871% ( 2) 00:10:54.315 6.827 - 6.874: 98.4948% ( 1) 00:10:54.315 6.921 - 6.969: 98.5102% ( 2) 00:10:54.315 6.969 - 7.016: 98.5179% ( 1) 00:10:54.315 7.016 - 7.064: 98.5257% ( 1) 00:10:54.315 7.111 - 7.159: 98.5411% ( 2) 00:10:54.315 7.348 - 7.396: 98.5488% ( 1) 00:10:54.315 7.490 - 7.538: 98.5720% ( 3) 00:10:54.315 7.538 - 7.585: 98.5951% ( 3) 00:10:54.315 7.585 - 7.633: 98.6029% ( 1) 00:10:54.315 7.680 - 7.727: 98.6106% ( 1) 00:10:54.315 7.727 - 7.775: 98.6183% ( 1) 00:10:54.315 7.775 - 7.822: 98.6337% ( 2) 00:10:54.315 7.870 - 7.917: 98.6415% ( 1) 00:10:54.315 7.917 - 7.964: 98.6646% ( 3) 00:10:54.315 7.964 - 8.012: 98.6723% ( 1) 00:10:54.315 8.012 - 8.059: 98.6955% ( 3) 00:10:54.315 8.059 - 8.107: 98.7109% ( 2) 00:10:54.315 8.107 - 8.154: 98.7186% ( 1) 00:10:54.315 8.154 - 8.201: 98.7264% ( 1) 00:10:54.315 8.201 - 8.249: 98.7341% ( 1) 00:10:54.315 8.249 - 8.296: 98.7418% ( 1) 00:10:54.315 8.296 - 8.344: 98.7650% ( 3) 00:10:54.315 8.344 - 8.391: 98.7727% ( 1) 00:10:54.315 8.439 - 8.486: 98.7804% ( 1) 00:10:54.315 8.628 - 8.676: 98.7958% ( 2) 00:10:54.315 8.818 - 8.865: 98.8036% ( 1) 00:10:54.315 9.007 - 9.055: 98.8113% ( 1) 00:10:54.315 9.339 - 9.387: 98.8190% ( 1) 00:10:54.315 9.387 - 9.434: 98.8267% ( 1) 00:10:54.315 9.434 - 9.481: 98.8344% ( 1) 00:10:54.315 10.287 - 10.335: 98.8421% ( 1) 00:10:54.315 10.430 - 10.477: 98.8499% ( 1) 00:10:54.315 11.046 - 11.093: 98.8576% ( 1) 00:10:54.315 11.378 - 11.425: 98.8653% ( 1) 00:10:54.315 11.804 - 11.852: 98.8730% ( 1) 00:10:54.315 11.899 - 11.947: 98.8807% ( 1) 00:10:54.315 12.089 - 12.136: 98.8885% ( 1) 00:10:54.315 12.326 - 12.421: 98.8962% ( 1) 00:10:54.315 12.516 - 12.610: 98.9039% ( 1) 00:10:54.315 12.705 - 12.800: 98.9116% ( 1) 00:10:54.315 12.800 - 12.895: 98.9271% ( 2) 00:10:54.315 12.895 - 12.990: 98.9348% ( 1) 00:10:54.315 12.990 - 13.084: 98.9425% ( 1) 00:10:54.315 13.084 - 13.179: 98.9502% ( 1) 00:10:54.315 13.369 - 13.464: 98.9579% ( 1) 00:10:54.315 13.464 - 13.559: 98.9734% ( 2) 00:10:54.315 13.748 - 13.843: 98.9811% ( 1) 00:10:54.315 13.843 - 13.938: 98.9888% ( 1) 00:10:54.315 14.033 - 14.127: 99.0042% ( 2) 00:10:54.315 14.127 - 14.222: 99.0120% ( 1) 00:10:54.315 14.507 - 14.601: 99.0197% ( 1) 00:10:54.315 15.170 - 15.265: 99.0274% ( 1) 00:10:54.315 15.929 - 16.024: 99.0351% ( 1) 00:10:54.315 17.067 - 17.161: 99.0428% ( 1) 00:10:54.315 17.256 - 17.351: 99.0583% ( 2) 00:10:54.315 17.351 - 17.446: 99.0660% ( 1) 00:10:54.315 17.446 - 17.541: 99.0969% ( 4) 00:10:54.315 17.541 - 17.636: 99.1277% ( 4) 00:10:54.315 17.636 - 17.730: 99.1586% ( 4) 00:10:54.315 17.730 - 17.825: 99.1895% ( 4) 00:10:54.315 17.825 - 17.920: 99.2281% ( 5) 00:10:54.315 17.920 - 18.015: 99.2976% ( 9) 00:10:54.315 18.015 - 18.110: 99.3284% ( 4) 00:10:54.315 18.110 - 18.204: 99.3825% ( 7) 00:10:54.315 18.204 - 18.299: 99.4597% ( 10) 00:10:54.315 18.299 - 18.394: 99.5291% ( 9) 00:10:54.315 18.394 - 18.489: 
99.6140% ( 11) 00:10:54.315 18.489 - 18.584: 99.6912% ( 10) 00:10:54.315 18.584 - 18.679: 99.7221% ( 4) 00:10:54.315 18.679 - 18.773: 99.7684% ( 6) 00:10:54.315 18.773 - 18.868: 99.8070% ( 5) 00:10:54.315 18.868 - 18.963: 99.8147% ( 1) 00:10:54.315 18.963 - 19.058: 99.8302% ( 2) 00:10:54.315 19.058 - 19.153: 99.8456% ( 2) 00:10:54.315 19.153 - 19.247: 99.8611% ( 2) 00:10:54.315 19.247 - 19.342: 99.8765% ( 2) 00:10:54.315 19.532 - 19.627: 99.8842% ( 1) 00:10:54.315 19.721 - 19.816: 99.8919% ( 1) 00:10:54.315 19.816 - 19.911: 99.9074% ( 2) 00:10:54.315 20.575 - 20.670: 99.9151% ( 1) 00:10:54.315 21.523 - 21.618: 99.9228% ( 1) 00:10:54.315 24.841 - 25.031: 99.9305% ( 1) 00:10:54.315 32.616 - 32.806: 99.9382% ( 1) 00:10:54.315 3980.705 - 4004.978: 99.9768% ( 5) 00:10:54.315 4004.978 - 4029.250: 99.9923% ( 2) 00:10:54.315 4975.881 - 5000.154: 100.0000% ( 1) 00:10:54.315 00:10:54.315 Complete histogram 00:10:54.315 ================== 00:10:54.315 Range in us Cumulative Count 00:10:54.315 2.074 - 2.086: 0.0463% ( 6) 00:10:54.315 2.086 - 2.098: 7.9429% ( 1023) 00:10:54.315 2.098 - 2.110: 21.0807% ( 1702) 00:10:54.315 2.110 - 2.121: 24.2841% ( 415) 00:10:54.315 2.121 - 2.133: 46.9394% ( 2935) 00:10:54.315 2.133 - 2.145: 57.6534% ( 1388) 00:10:54.315 2.145 - 2.157: 59.1046% ( 188) 00:10:54.315 2.157 - 2.169: 63.8055% ( 609) 00:10:54.315 2.169 - 2.181: 66.4531% ( 343) 00:10:54.315 2.181 - 2.193: 68.8460% ( 310) 00:10:54.315 2.193 - 2.204: 77.3369% ( 1100) 00:10:54.315 2.204 - 2.216: 80.3551% ( 391) 00:10:54.315 2.216 - 2.228: 81.0884% ( 95) 00:10:54.315 2.228 - 2.240: 82.6322% ( 200) 00:10:54.315 2.240 - 2.252: 83.9058% ( 165) 00:10:54.315 2.252 - 2.264: 84.8090% ( 117) 00:10:54.315 2.264 - 2.276: 88.6993% ( 504) 00:10:54.315 2.276 - 2.287: 91.4010% ( 350) 00:10:54.315 2.287 - 2.299: 92.1806% ( 101) 00:10:54.315 2.299 - 2.311: 92.9911% ( 105) 00:10:54.315 2.311 - 2.323: 93.3076% ( 41) 00:10:54.315 2.323 - 2.335: 93.5546% ( 32) 00:10:54.315 2.335 - 2.347: 94.0178% ( 60) 00:10:54.315 2.347 - 2.359: 94.5195% ( 65) 00:10:54.315 2.359 - 2.370: 94.8128% ( 38) 00:10:54.315 2.370 - 2.382: 94.9749% ( 21) 00:10:54.315 2.382 - 2.394: 95.1525% ( 23) 00:10:54.315 2.394 - 2.406: 95.2914% ( 18) 00:10:54.315 2.406 - 2.418: 95.5461% ( 33) 00:10:54.315 2.418 - 2.430: 95.8780% ( 43) 00:10:54.315 2.430 - 2.441: 96.1559% ( 36) 00:10:54.315 2.441 - 2.453: 96.4029% ( 32) 00:10:54.315 2.453 - 2.465: 96.6499% ( 32) 00:10:54.315 2.465 - 2.477: 96.8352% ( 24) 00:10:54.315 2.477 - 2.489: 97.0282% ( 25) 00:10:54.315 2.489 - 2.501: 97.1826% ( 20) 00:10:54.315 2.501 - 2.513: 97.3601% ( 23) 00:10:54.315 2.513 - 2.524: 97.4527% ( 12) 00:10:54.315 2.524 - 2.536: 97.5453% ( 12) 00:10:54.315 2.536 - 2.548: 97.6766% ( 17) 00:10:54.315 2.548 - 2.560: 97.7769% ( 13) 00:10:54.315 2.560 - 2.572: 97.8541% ( 10) 00:10:54.315 2.572 - 2.584: 97.9004% ( 6) 00:10:54.315 2.584 - 2.596: 97.9390% ( 5) 00:10:54.315 2.596 - 2.607: 97.9931% ( 7) 00:10:54.315 2.607 - 2.619: 98.0239% ( 4) 00:10:54.315 2.619 - 2.631: 98.0471% ( 3) 00:10:54.315 2.631 - 2.643: 98.1011% ( 7) 00:10:54.315 2.643 - 2.655: 98.1166% ( 2) 00:10:54.315 2.655 - 2.667: 98.1474% ( 4) 00:10:54.315 2.667 - 2.679: 98.1629% ( 2) 00:10:54.315 2.690 - 2.702: 98.2092% ( 6) 00:10:54.315 2.702 - 2.714: 98.2246% ( 2) 00:10:54.315 2.714 - 2.726: 98.2478% ( 3) 00:10:54.315 2.726 - 2.738: 98.2555% ( 1) 00:10:54.315 2.738 - 2.750: 98.2632% ( 1) 00:10:54.315 2.750 - 2.761: 98.2787% ( 2) 00:10:54.315 2.773 - 2.785: 98.2864% ( 1) 00:10:54.315 2.785 - 2.797: 98.3018% ( 2) 00:10:54.315 2.821 - 2.833: 
98.3250% ( 3) 00:10:54.315 2.844 - 2.856: 98.3327% ( 1) 00:10:54.315 2.927 - 2.939: 98.3558% ( 3) 00:10:54.315 2.987 - 2.999: 98.3636% ( 1) 00:10:54.315 3.034 - 3.058: 98.3790% ( 2) 00:10:54.315 3.295 - 3.319: 98.3867% ( 1) 00:10:54.315 3.390 - 3.413: 98.3944% ( 1) 00:10:54.315 3.413 - 3.437: 98.4022% ( 1) 00:10:54.315 3.508 - 3.532: 98.4099% ( 1) 00:10:54.315 3.579 - 3.603: 98.4176% ( 1) 00:10:54.315 3.603 - 3.627: 98.4253% ( 1) 00:10:54.315 3.627 - 3.650: 98.4485% ( 3) 00:10:54.315 3.674 - 3.698: 98.4562% ( 1) 00:10:54.315 3.721 - 3.745: 9[2024-07-12 11:47:43.390707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:54.315 8.4639% ( 1) 00:10:54.315 3.745 - 3.769: 98.4794% ( 2) 00:10:54.315 3.769 - 3.793: 98.4948% ( 2) 00:10:54.315 3.816 - 3.840: 98.5102% ( 2) 00:10:54.315 3.840 - 3.864: 98.5179% ( 1) 00:10:54.315 3.887 - 3.911: 98.5257% ( 1) 00:10:54.315 3.911 - 3.935: 98.5411% ( 2) 00:10:54.315 3.935 - 3.959: 98.5488% ( 1) 00:10:54.315 3.959 - 3.982: 98.5565% ( 1) 00:10:54.315 3.982 - 4.006: 98.5643% ( 1) 00:10:54.315 4.006 - 4.030: 98.5720% ( 1) 00:10:54.315 4.077 - 4.101: 98.5797% ( 1) 00:10:54.315 4.101 - 4.124: 98.5874% ( 1) 00:10:54.315 4.314 - 4.338: 98.5951% ( 1) 00:10:54.315 4.480 - 4.504: 98.6029% ( 1) 00:10:54.315 4.764 - 4.788: 98.6106% ( 1) 00:10:54.315 4.907 - 4.930: 98.6183% ( 1) 00:10:54.315 5.096 - 5.120: 98.6337% ( 2) 00:10:54.315 5.167 - 5.191: 98.6569% ( 3) 00:10:54.315 5.239 - 5.262: 98.6646% ( 1) 00:10:54.315 5.807 - 5.831: 98.6723% ( 1) 00:10:54.315 5.855 - 5.879: 98.6800% ( 1) 00:10:54.315 5.879 - 5.902: 98.6878% ( 1) 00:10:54.315 5.926 - 5.950: 98.6955% ( 1) 00:10:54.316 5.950 - 5.973: 98.7032% ( 1) 00:10:54.316 6.044 - 6.068: 98.7109% ( 1) 00:10:54.316 6.068 - 6.116: 98.7264% ( 2) 00:10:54.316 6.116 - 6.163: 98.7341% ( 1) 00:10:54.316 6.210 - 6.258: 98.7418% ( 1) 00:10:54.316 6.258 - 6.305: 98.7572% ( 2) 00:10:54.316 6.305 - 6.353: 98.7650% ( 1) 00:10:54.316 6.400 - 6.447: 98.7804% ( 2) 00:10:54.316 6.542 - 6.590: 98.7958% ( 2) 00:10:54.316 6.590 - 6.637: 98.8036% ( 1) 00:10:54.316 6.637 - 6.684: 98.8190% ( 2) 00:10:54.316 6.827 - 6.874: 98.8267% ( 1) 00:10:54.316 8.344 - 8.391: 98.8344% ( 1) 00:10:54.316 10.999 - 11.046: 98.8421% ( 1) 00:10:54.316 15.550 - 15.644: 98.8499% ( 1) 00:10:54.316 15.644 - 15.739: 98.8576% ( 1) 00:10:54.316 15.739 - 15.834: 98.8653% ( 1) 00:10:54.316 15.834 - 15.929: 98.8807% ( 2) 00:10:54.316 15.929 - 16.024: 98.8962% ( 2) 00:10:54.316 16.024 - 16.119: 98.9039% ( 1) 00:10:54.316 16.119 - 16.213: 98.9348% ( 4) 00:10:54.316 16.213 - 16.308: 98.9579% ( 3) 00:10:54.316 16.308 - 16.403: 98.9734% ( 2) 00:10:54.316 16.403 - 16.498: 99.0042% ( 4) 00:10:54.316 16.498 - 16.593: 99.0660% ( 8) 00:10:54.316 16.593 - 16.687: 99.1200% ( 7) 00:10:54.316 16.687 - 16.782: 99.1663% ( 6) 00:10:54.316 16.782 - 16.877: 99.2049% ( 5) 00:10:54.316 16.877 - 16.972: 99.2358% ( 4) 00:10:54.316 16.972 - 17.067: 99.2590% ( 3) 00:10:54.316 17.541 - 17.636: 99.2667% ( 1) 00:10:54.316 17.825 - 17.920: 99.2744% ( 1) 00:10:54.316 18.015 - 18.110: 99.2898% ( 2) 00:10:54.316 18.110 - 18.204: 99.2976% ( 1) 00:10:54.316 18.204 - 18.299: 99.3053% ( 1) 00:10:54.316 18.299 - 18.394: 99.3207% ( 2) 00:10:54.316 18.394 - 18.489: 99.3284% ( 1) 00:10:54.316 18.679 - 18.773: 99.3362% ( 1) 00:10:54.316 20.006 - 20.101: 99.3439% ( 1) 00:10:54.316 29.013 - 29.203: 99.3516% ( 1) 00:10:54.316 33.375 - 33.564: 99.3593% ( 1) 00:10:54.316 2002.489 - 2014.625: 99.3670% ( 1) 00:10:54.316 2184.533 - 2196.670: 99.3748% ( 1) 
00:10:54.316 2997.665 - 3009.801: 99.3825% ( 1) 00:10:54.316 3980.705 - 4004.978: 99.8842% ( 65) 00:10:54.316 4004.978 - 4029.250: 99.9923% ( 14) 00:10:54.316 5995.330 - 6019.603: 100.0000% ( 1) 00:10:54.316 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:54.316 [ 00:10:54.316 { 00:10:54.316 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:54.316 "subtype": "Discovery", 00:10:54.316 "listen_addresses": [], 00:10:54.316 "allow_any_host": true, 00:10:54.316 "hosts": [] 00:10:54.316 }, 00:10:54.316 { 00:10:54.316 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:54.316 "subtype": "NVMe", 00:10:54.316 "listen_addresses": [ 00:10:54.316 { 00:10:54.316 "trtype": "VFIOUSER", 00:10:54.316 "adrfam": "IPv4", 00:10:54.316 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:54.316 "trsvcid": "0" 00:10:54.316 } 00:10:54.316 ], 00:10:54.316 "allow_any_host": true, 00:10:54.316 "hosts": [], 00:10:54.316 "serial_number": "SPDK1", 00:10:54.316 "model_number": "SPDK bdev Controller", 00:10:54.316 "max_namespaces": 32, 00:10:54.316 "min_cntlid": 1, 00:10:54.316 "max_cntlid": 65519, 00:10:54.316 "namespaces": [ 00:10:54.316 { 00:10:54.316 "nsid": 1, 00:10:54.316 "bdev_name": "Malloc1", 00:10:54.316 "name": "Malloc1", 00:10:54.316 "nguid": "DF9B9183405A4B7FA20724585424C2B4", 00:10:54.316 "uuid": "df9b9183-405a-4b7f-a207-24585424c2b4" 00:10:54.316 }, 00:10:54.316 { 00:10:54.316 "nsid": 2, 00:10:54.316 "bdev_name": "Malloc3", 00:10:54.316 "name": "Malloc3", 00:10:54.316 "nguid": "9033E32F29B6447BA33906B6534CC5BD", 00:10:54.316 "uuid": "9033e32f-29b6-447b-a339-06b6534cc5bd" 00:10:54.316 } 00:10:54.316 ] 00:10:54.316 }, 00:10:54.316 { 00:10:54.316 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:54.316 "subtype": "NVMe", 00:10:54.316 "listen_addresses": [ 00:10:54.316 { 00:10:54.316 "trtype": "VFIOUSER", 00:10:54.316 "adrfam": "IPv4", 00:10:54.316 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:54.316 "trsvcid": "0" 00:10:54.316 } 00:10:54.316 ], 00:10:54.316 "allow_any_host": true, 00:10:54.316 "hosts": [], 00:10:54.316 "serial_number": "SPDK2", 00:10:54.316 "model_number": "SPDK bdev Controller", 00:10:54.316 "max_namespaces": 32, 00:10:54.316 "min_cntlid": 1, 00:10:54.316 "max_cntlid": 65519, 00:10:54.316 "namespaces": [ 00:10:54.316 { 00:10:54.316 "nsid": 1, 00:10:54.316 "bdev_name": "Malloc2", 00:10:54.316 "name": "Malloc2", 00:10:54.316 "nguid": "B143040D30CA473BA1EF3698F2FCC544", 00:10:54.316 "uuid": "b143040d-30ca-473b-a1ef-3698f2fcc544" 00:10:54.316 } 00:10:54.316 ] 00:10:54.316 } 00:10:54.316 ] 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=873918 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:54.316 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:54.316 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.574 [2024-07-12 11:47:43.833301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:54.574 Malloc4 00:10:54.574 11:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:54.831 [2024-07-12 11:47:44.177756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:54.831 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:54.831 Asynchronous Event Request test 00:10:54.831 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.831 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.831 Registering asynchronous event callbacks... 00:10:54.831 Starting namespace attribute notice tests for all controllers... 00:10:54.831 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:54.831 aer_cb - Changed Namespace 00:10:54.831 Cleaning up... 
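[Editor's note] For readability, the aer_vfio_user sequence traced above condenses to roughly the following minimal sketch. It is not the suite's exact script: paths are shortened to repo-relative form, the waitforfile helper is replaced by a plain polling loop, and background/wait handling is simplified. The JSON dump that follows confirms the effect — Malloc4 appears as nsid 2 under nqn.2019-07.io.spdk:cnode2.

```bash
# Sketch of the AER flow traced above (paths shortened; assumes the nvmf target
# already serves nqn.2019-07.io.spdk:cnode2 on /var/run/vfio-user/domain/vfio-user2/2).
rpc=./scripts/rpc.py
traddr=/var/run/vfio-user/domain/vfio-user2/2
subnqn=nqn.2019-07.io.spdk:cnode2

# Start the AER listener; it touches the file once its callbacks are registered.
./test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
    -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
rm -f /tmp/aer_touch_file

# Adding a second namespace triggers the namespace-attribute-changed AEN,
# which the listener reports as "aer_cb - Changed Namespace".
$rpc bdev_malloc_create 64 512 --name Malloc4
$rpc nvmf_subsystem_add_ns "$subnqn" Malloc4 -n 2
$rpc nvmf_get_subsystems   # the JSON dump below lists Malloc4 as nsid 2
wait $aerpid
```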
00:10:55.090 [ 00:10:55.090 { 00:10:55.090 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:55.090 "subtype": "Discovery", 00:10:55.090 "listen_addresses": [], 00:10:55.090 "allow_any_host": true, 00:10:55.090 "hosts": [] 00:10:55.090 }, 00:10:55.090 { 00:10:55.090 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:55.090 "subtype": "NVMe", 00:10:55.090 "listen_addresses": [ 00:10:55.090 { 00:10:55.090 "trtype": "VFIOUSER", 00:10:55.090 "adrfam": "IPv4", 00:10:55.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:55.090 "trsvcid": "0" 00:10:55.090 } 00:10:55.090 ], 00:10:55.090 "allow_any_host": true, 00:10:55.090 "hosts": [], 00:10:55.090 "serial_number": "SPDK1", 00:10:55.090 "model_number": "SPDK bdev Controller", 00:10:55.090 "max_namespaces": 32, 00:10:55.090 "min_cntlid": 1, 00:10:55.090 "max_cntlid": 65519, 00:10:55.090 "namespaces": [ 00:10:55.090 { 00:10:55.090 "nsid": 1, 00:10:55.090 "bdev_name": "Malloc1", 00:10:55.090 "name": "Malloc1", 00:10:55.090 "nguid": "DF9B9183405A4B7FA20724585424C2B4", 00:10:55.090 "uuid": "df9b9183-405a-4b7f-a207-24585424c2b4" 00:10:55.090 }, 00:10:55.090 { 00:10:55.090 "nsid": 2, 00:10:55.090 "bdev_name": "Malloc3", 00:10:55.090 "name": "Malloc3", 00:10:55.090 "nguid": "9033E32F29B6447BA33906B6534CC5BD", 00:10:55.090 "uuid": "9033e32f-29b6-447b-a339-06b6534cc5bd" 00:10:55.090 } 00:10:55.090 ] 00:10:55.090 }, 00:10:55.090 { 00:10:55.090 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:55.090 "subtype": "NVMe", 00:10:55.090 "listen_addresses": [ 00:10:55.090 { 00:10:55.090 "trtype": "VFIOUSER", 00:10:55.090 "adrfam": "IPv4", 00:10:55.090 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:55.090 "trsvcid": "0" 00:10:55.090 } 00:10:55.090 ], 00:10:55.090 "allow_any_host": true, 00:10:55.090 "hosts": [], 00:10:55.090 "serial_number": "SPDK2", 00:10:55.090 "model_number": "SPDK bdev Controller", 00:10:55.090 "max_namespaces": 32, 00:10:55.090 "min_cntlid": 1, 00:10:55.090 "max_cntlid": 65519, 00:10:55.090 "namespaces": [ 00:10:55.090 { 00:10:55.090 "nsid": 1, 00:10:55.090 "bdev_name": "Malloc2", 00:10:55.090 "name": "Malloc2", 00:10:55.090 "nguid": "B143040D30CA473BA1EF3698F2FCC544", 00:10:55.090 "uuid": "b143040d-30ca-473b-a1ef-3698f2fcc544" 00:10:55.090 }, 00:10:55.090 { 00:10:55.090 "nsid": 2, 00:10:55.090 "bdev_name": "Malloc4", 00:10:55.090 "name": "Malloc4", 00:10:55.090 "nguid": "6E13CD00B78641779AEA27516281A205", 00:10:55.090 "uuid": "6e13cd00-b786-4177-9aea-27516281a205" 00:10:55.090 } 00:10:55.090 ] 00:10:55.090 } 00:10:55.090 ] 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 873918 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 868309 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 868309 ']' 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 868309 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 868309 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 
00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 868309' 00:10:55.090 killing process with pid 868309 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 868309 00:10:55.090 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 868309 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=874060 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 874060' 00:10:55.661 Process pid: 874060 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 874060 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 874060 ']' 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:55.661 11:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:55.661 [2024-07-12 11:47:44.895852] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:55.661 [2024-07-12 11:47:44.896840] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:10:55.661 [2024-07-12 11:47:44.896909] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.661 [2024-07-12 11:47:44.961401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.661 [2024-07-12 11:47:45.080567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.661 [2024-07-12 11:47:45.080621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:55.661 [2024-07-12 11:47:45.080638] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.661 [2024-07-12 11:47:45.080651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.661 [2024-07-12 11:47:45.080663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.661 [2024-07-12 11:47:45.080731] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.661 [2024-07-12 11:47:45.080785] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.661 [2024-07-12 11:47:45.080900] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.661 [2024-07-12 11:47:45.080904] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.922 [2024-07-12 11:47:45.187720] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:55.922 [2024-07-12 11:47:45.187970] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:55.922 [2024-07-12 11:47:45.188219] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:55.922 [2024-07-12 11:47:45.188871] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:55.922 [2024-07-12 11:47:45.189111] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:56.490 11:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:56.490 11:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:10:56.490 11:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:57.425 11:47:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:57.993 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:57.993 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:57.993 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:57.993 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:57.993 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:57.993 Malloc1 00:10:58.251 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:58.510 11:47:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:58.768 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:59.026 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:59.026 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:59.026 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:59.283 Malloc2 00:10:59.283 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:59.541 11:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:59.799 11:47:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 874060 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 874060 ']' 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 874060 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 874060 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 874060' 00:11:00.058 killing process with pid 874060 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 874060 00:11:00.058 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 874060 00:11:00.317 11:47:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:00.317 11:47:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:00.317 00:11:00.317 real 0m53.582s 00:11:00.317 user 3m30.876s 00:11:00.317 sys 0m4.773s 00:11:00.317 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:00.317 11:47:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:00.317 ************************************ 00:11:00.317 END TEST nvmf_vfio_user 00:11:00.317 ************************************ 00:11:00.317 11:47:49 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:00.317 11:47:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:00.317 11:47:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:00.317 11:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.575 ************************************ 00:11:00.575 START TEST nvmf_vfio_user_nvme_compliance 00:11:00.575 ************************************ 
00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:00.575 * Looking for test storage... 00:11:00.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=874787 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 874787' 00:11:00.575 Process pid: 874787 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 874787 00:11:00.575 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 874787 ']' 00:11:00.576 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.576 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:00.576 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.576 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:00.576 11:47:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:00.576 [2024-07-12 11:47:49.931676] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:11:00.576 [2024-07-12 11:47:49.931760] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.576 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.576 [2024-07-12 11:47:49.990096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.833 [2024-07-12 11:47:50.104677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.833 [2024-07-12 11:47:50.104731] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.833 [2024-07-12 11:47:50.104761] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.833 [2024-07-12 11:47:50.104773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.833 [2024-07-12 11:47:50.104784] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:00.833 [2024-07-12 11:47:50.107892] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.833 [2024-07-12 11:47:50.107943] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.833 [2024-07-12 11:47:50.107947] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.833 11:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:00.833 11:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:11:00.833 11:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:01.770 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:02.029 malloc0 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:02.029 11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.029 
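[Editor's note] Condensing the rpc_cmd trace above, the compliance target setup amounts to roughly the following sketch. It is an approximation, not the suite's code: rpc.py is called directly in place of the rpc_cmd wrapper, waitforlisten is approximated with a sleep, and paths are shortened to repo-relative form. The nvme_compliance binary launched below then attaches to this /var/run/vfio-user endpoint.

```bash
# Sketch of the target setup traced above (rpc_cmd replaced by rpc.py;
# default RPC socket assumed; waitforlisten approximated).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
sleep 1                                   # stand-in for waitforlisten $nvmfpid

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# The compliance binary below then connects with:
#   -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
```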
11:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:02.029 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.029 00:11:02.029 00:11:02.029 CUnit - A unit testing framework for C - Version 2.1-3 00:11:02.029 http://cunit.sourceforge.net/ 00:11:02.029 00:11:02.029 00:11:02.029 Suite: nvme_compliance 00:11:02.029 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 11:47:51.458438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.029 [2024-07-12 11:47:51.459863] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:02.029 [2024-07-12 11:47:51.459908] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:02.029 [2024-07-12 11:47:51.459921] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:02.030 [2024-07-12 11:47:51.464473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.030 passed 00:11:02.287 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 11:47:51.548082] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.287 [2024-07-12 11:47:51.551101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.287 passed 00:11:02.287 Test: admin_identify_ns ...[2024-07-12 11:47:51.639361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.287 [2024-07-12 11:47:51.698898] ctrlr.c:2707:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.287 [2024-07-12 11:47:51.706906] ctrlr.c:2707:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:02.287 [2024-07-12 11:47:51.728006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.287 passed 00:11:02.545 Test: admin_get_features_mandatory_features ...[2024-07-12 11:47:51.812690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.545 [2024-07-12 11:47:51.815713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.545 passed 00:11:02.545 Test: admin_get_features_optional_features ...[2024-07-12 11:47:51.896244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.545 [2024-07-12 11:47:51.899263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.545 passed 00:11:02.545 Test: admin_set_features_number_of_queues ...[2024-07-12 11:47:51.985553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.805 [2024-07-12 11:47:52.090009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.805 passed 00:11:02.805 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 11:47:52.172908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.805 [2024-07-12 11:47:52.175942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.805 passed 00:11:02.805 Test: admin_get_log_page_with_lpo ...[2024-07-12 11:47:52.258632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.065 [2024-07-12 11:47:52.325920] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:03.065 [2024-07-12 11:47:52.338979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.065 passed 00:11:03.065 Test: fabric_property_get ...[2024-07-12 11:47:52.424235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.065 [2024-07-12 11:47:52.425510] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:03.065 [2024-07-12 11:47:52.427261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.065 passed 00:11:03.065 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 11:47:52.509813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.065 [2024-07-12 11:47:52.511114] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:03.065 [2024-07-12 11:47:52.512835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.065 passed 00:11:03.325 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 11:47:52.597367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.325 [2024-07-12 11:47:52.680889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:03.325 [2024-07-12 11:47:52.696891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:03.325 [2024-07-12 11:47:52.701981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.325 passed 00:11:03.325 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 11:47:52.785540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.325 [2024-07-12 11:47:52.786797] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:03.325 [2024-07-12 11:47:52.788558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.583 passed 00:11:03.583 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 11:47:52.872705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.583 [2024-07-12 11:47:52.947890] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:03.583 [2024-07-12 11:47:52.971876] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:03.583 [2024-07-12 11:47:52.976988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.583 passed 00:11:03.583 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 11:47:53.059538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.583 [2024-07-12 11:47:53.060796] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:03.583 [2024-07-12 11:47:53.060850] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:03.583 [2024-07-12 11:47:53.062558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.841 passed 00:11:03.841 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 11:47:53.148789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.841 [2024-07-12 11:47:53.245893] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:03.841 [2024-07-12 11:47:53.253876] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:03.841 [2024-07-12 11:47:53.261893] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:03.841 [2024-07-12 11:47:53.269893] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:03.841 [2024-07-12 11:47:53.298999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.841 passed 00:11:04.099 Test: admin_create_io_sq_verify_pc ...[2024-07-12 11:47:53.382936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.099 [2024-07-12 11:47:53.398890] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:04.099 [2024-07-12 11:47:53.417307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.099 passed 00:11:04.099 Test: admin_create_io_qp_max_qps ...[2024-07-12 11:47:53.500850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.498 [2024-07-12 11:47:54.597908] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:05.498 [2024-07-12 11:47:54.972940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.800 passed 00:11:05.800 Test: admin_create_io_sq_shared_cq ...[2024-07-12 11:47:55.059363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.800 [2024-07-12 11:47:55.190873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:05.800 [2024-07-12 11:47:55.227973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.800 passed 00:11:05.800 00:11:05.800 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.800 suites 1 1 n/a 0 0 00:11:05.800 tests 18 18 18 0 0 00:11:05.800 asserts 360 360 360 0 n/a 00:11:05.800 00:11:05.800 Elapsed time = 1.561 seconds 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 874787 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 874787 ']' 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 874787 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:06.057 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 874787 00:11:06.058 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:06.058 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:06.058 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 874787' 00:11:06.058 killing process with pid 874787 00:11:06.058 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 874787 00:11:06.058 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 874787 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:06.316 00:11:06.316 real 0m5.797s 00:11:06.316 user 0m16.150s 00:11:06.316 sys 0m0.585s 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:06.316 ************************************ 00:11:06.316 END TEST nvmf_vfio_user_nvme_compliance 00:11:06.316 ************************************ 00:11:06.316 11:47:55 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:06.316 11:47:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:06.316 11:47:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:06.316 11:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.316 ************************************ 00:11:06.316 START TEST nvmf_vfio_user_fuzz 00:11:06.316 ************************************ 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:06.316 * Looking for test storage... 00:11:06.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.316 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.317 11:47:55 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=875517 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 875517' 00:11:06.317 Process pid: 875517 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 875517 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 875517 ']' 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:06.317 11:47:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:06.576 11:47:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:06.576 11:47:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:11:06.576 11:47:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:07.955 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 malloc0 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:07.956 11:47:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:40.018 Fuzzing completed. 
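For reference, the target configuration exercised by the fuzz run above reduces to the RPC sequence recorded in the trace. The sketch below is a minimal consolidation of those same calls, assuming an nvmf_tgt process is already running with its RPC socket at the default /var/tmp/spdk.sock; in these test scripts rpc_cmd is a thin wrapper around scripts/rpc.py, and paths are relative to the SPDK source tree.

    # create the vfio-user transport and a 64 MB malloc backing bdev (block size 512)
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # expose the bdev through a subsystem listening on the vfio-user socket directory
    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # run the fuzzer against the resulting trid for 30 seconds with a fixed seed, as in the log
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a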
Shutting down the fuzz application 00:11:40.018 00:11:40.018 Dumping successful admin opcodes: 00:11:40.018 8, 9, 10, 24, 00:11:40.018 Dumping successful io opcodes: 00:11:40.018 0, 00:11:40.018 NS: 0x200003a1ef00 I/O qp, Total commands completed: 556623, total successful commands: 2140, random_seed: 2832732160 00:11:40.018 NS: 0x200003a1ef00 admin qp, Total commands completed: 100135, total successful commands: 821, random_seed: 1777349760 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 875517 ']' 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 875517' 00:11:40.018 killing process with pid 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 875517 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:40.018 00:11:40.018 real 0m32.318s 00:11:40.018 user 0m31.033s 00:11:40.018 sys 0m28.018s 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:40.018 11:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 ************************************ 00:11:40.018 END TEST nvmf_vfio_user_fuzz 00:11:40.018 ************************************ 00:11:40.018 11:48:27 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:40.018 11:48:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:40.018 11:48:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:40.018 11:48:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 ************************************ 00:11:40.018 START TEST nvmf_host_management 00:11:40.018 ************************************ 
00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:40.018 * Looking for test storage... 00:11:40.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:40.018 11:48:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.585 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.586 11:48:29 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:40.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:40.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:40.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:40.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.586 11:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.586 11:48:30 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:11:40.586 00:11:40.586 --- 10.0.0.2 ping statistics --- 00:11:40.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.586 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:11:40.586 00:11:40.586 --- 10.0.0.1 ping statistics --- 00:11:40.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.586 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.586 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=881480 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 881480 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 881480 ']' 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:40.587 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.846 [2024-07-12 11:48:30.095629] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:11:40.846 [2024-07-12 11:48:30.095717] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.846 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.846 [2024-07-12 11:48:30.164522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.846 [2024-07-12 11:48:30.282497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.846 [2024-07-12 11:48:30.282547] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.846 [2024-07-12 11:48:30.282575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.846 [2024-07-12 11:48:30.282587] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.846 [2024-07-12 11:48:30.282598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.846 [2024-07-12 11:48:30.282685] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.846 [2024-07-12 11:48:30.282749] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.846 [2024-07-12 11:48:30.282797] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.846 [2024-07-12 11:48:30.282800] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.104 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 [2024-07-12 11:48:30.446837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 Malloc0 00:11:41.105 [2024-07-12 11:48:30.508336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=881633 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 881633 /var/tmp/bdevperf.sock 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 881633 ']' 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:41.105 { 00:11:41.105 "params": { 00:11:41.105 "name": "Nvme$subsystem", 00:11:41.105 "trtype": "$TEST_TRANSPORT", 00:11:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:41.105 "adrfam": "ipv4", 00:11:41.105 "trsvcid": "$NVMF_PORT", 00:11:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:41.105 "hdgst": ${hdgst:-false}, 00:11:41.105 "ddgst": ${ddgst:-false} 00:11:41.105 }, 00:11:41.105 "method": "bdev_nvme_attach_controller" 00:11:41.105 } 00:11:41.105 EOF 00:11:41.105 )") 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:41.105 11:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:41.105 "params": { 00:11:41.105 "name": "Nvme0", 00:11:41.105 "trtype": "tcp", 00:11:41.105 "traddr": "10.0.0.2", 00:11:41.105 "adrfam": "ipv4", 00:11:41.105 "trsvcid": "4420", 00:11:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:41.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:41.105 "hdgst": false, 00:11:41.105 "ddgst": false 00:11:41.105 }, 00:11:41.105 "method": "bdev_nvme_attach_controller" 00:11:41.105 }' 00:11:41.105 [2024-07-12 11:48:30.588094] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:11:41.105 [2024-07-12 11:48:30.588194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881633 ] 00:11:41.363 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.363 [2024-07-12 11:48:30.648578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.363 [2024-07-12 11:48:30.758592] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.621 Running I/O for 10 seconds... 
00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.190 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.190 [2024-07-12 11:48:31.588273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.588982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.588996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.190 [2024-07-12 11:48:31.589297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:42.190 [2024-07-12 11:48:31.589311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:42.191 [2024-07-12 11:48:31.589620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 
[2024-07-12 11:48:31.589937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.589968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.589983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 
11:48:31.590245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.191 [2024-07-12 11:48:31.590305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20577d0 is same with the state(5) to be set 00:11:42.191 [2024-07-12 11:48:31.590400] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20577d0 was disconnected and freed. reset controller. 00:11:42.191 [2024-07-12 11:48:31.590487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.191 [2024-07-12 11:48:31.590510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.191 [2024-07-12 11:48:31.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.191 [2024-07-12 11:48:31.590574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.191 [2024-07-12 11:48:31.590602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.191 [2024-07-12 11:48:31.590615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46940 is same with the state(5) to be set 00:11:42.191 [2024-07-12 11:48:31.591769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.191 task offset: 114176 on job bdev=Nvme0n1 fails 00:11:42.191 00:11:42.191 Latency(us) 00:11:42.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:11:42.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:42.191 Job: Nvme0n1 ended in about 0.53 seconds with error 00:11:42.191 Verification LBA range: start 0x0 length 0x400 00:11:42.191 Nvme0n1 : 0.53 1582.08 98.88 121.70 0.00 36678.54 2924.85 32622.36 00:11:42.191 =================================================================================================================== 00:11:42.191 Total : 1582.08 98.88 121.70 0.00 36678.54 2924.85 32622.36 00:11:42.191 [2024-07-12 11:48:31.593645] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:42.191 [2024-07-12 11:48:31.593673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46940 (9): Bad file descriptor 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.191 11:48:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:42.191 [2024-07-12 11:48:31.604197] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 881633 00:11:43.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (881633) - No such process 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:43.124 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:43.124 { 00:11:43.124 "params": { 00:11:43.124 "name": "Nvme$subsystem", 00:11:43.124 "trtype": "$TEST_TRANSPORT", 00:11:43.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:43.125 "adrfam": "ipv4", 00:11:43.125 "trsvcid": "$NVMF_PORT", 00:11:43.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:43.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:43.125 "hdgst": ${hdgst:-false}, 00:11:43.125 "ddgst": ${ddgst:-false} 00:11:43.125 }, 00:11:43.125 "method": "bdev_nvme_attach_controller" 00:11:43.125 } 00:11:43.125 EOF 00:11:43.125 )") 00:11:43.125 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:43.125 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
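Note on the bdevperf invocation being assembled here: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the expanded JSON is printed just below) and the test hands it to bdevperf through --json /dev/fd/62. The following is only a sketch of an equivalent standalone invocation, assuming the target from this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0 / host0) is still listening and assuming SPDK's usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} wrapper around the entry shown in the trace; it is not part of the test output.

    # sketch only: same queue depth / IO size / workload / runtime as the traced run
    ./build/examples/bdevperf --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    ) -q 64 -o 65536 -w verify -t 1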
00:11:43.125 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:43.125 11:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:43.125 "params": { 00:11:43.125 "name": "Nvme0", 00:11:43.125 "trtype": "tcp", 00:11:43.125 "traddr": "10.0.0.2", 00:11:43.125 "adrfam": "ipv4", 00:11:43.125 "trsvcid": "4420", 00:11:43.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:43.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:43.125 "hdgst": false, 00:11:43.125 "ddgst": false 00:11:43.125 }, 00:11:43.125 "method": "bdev_nvme_attach_controller" 00:11:43.125 }' 00:11:43.383 [2024-07-12 11:48:32.644368] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:11:43.383 [2024-07-12 11:48:32.644441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881912 ] 00:11:43.383 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.383 [2024-07-12 11:48:32.704767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.383 [2024-07-12 11:48:32.819181] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.641 Running I/O for 1 seconds... 00:11:44.575 00:11:44.575 Latency(us) 00:11:44.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.575 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:44.575 Verification LBA range: start 0x0 length 0x400 00:11:44.575 Nvme0n1 : 1.00 1592.12 99.51 0.00 0.00 39555.16 7718.68 36117.62 00:11:44.575 =================================================================================================================== 00:11:44.575 Total : 1592.12 99.51 0.00 0.00 39555.16 7718.68 36117.62 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.833 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.833 rmmod nvme_tcp 00:11:44.833 rmmod nvme_fabrics 00:11:45.092 rmmod nvme_keyring 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # 
'[' -n 881480 ']' 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 881480 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 881480 ']' 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 881480 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 881480 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 881480' 00:11:45.092 killing process with pid 881480 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 881480 00:11:45.092 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 881480 00:11:45.351 [2024-07-12 11:48:34.661145] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.351 11:48:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.254 11:48:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:47.254 11:48:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:47.254 00:11:47.254 real 0m8.704s 00:11:47.254 user 0m20.382s 00:11:47.254 sys 0m2.674s 00:11:47.254 11:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:47.254 11:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.254 ************************************ 00:11:47.254 END TEST nvmf_host_management 00:11:47.254 ************************************ 00:11:47.513 11:48:36 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:47.513 11:48:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:47.513 11:48:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:47.513 11:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:47.513 ************************************ 00:11:47.513 START TEST nvmf_lvol 00:11:47.513 ************************************ 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:47.513 * Looking for test storage... 00:11:47.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.513 11:48:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.414 11:48:38 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:49.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:49.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.414 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:49.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
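The discovery loop traced here (the echo for the second port follows just below) maps each detected PCI function to its kernel netdev via sysfs and keeps the interfaces that report up. A rough by-hand equivalent for one port, sketch only; the PCI address, device ID 0x159b (Intel E810 family) and the cvl_0_0 name are what this node reported and will differ on other test nodes.

    pci=0000:0a:00.0
    ls /sys/bus/pci/devices/$pci/net/        # -> cvl_0_0 on this node
    cat /sys/bus/pci/devices/$pci/device     # -> 0x159b
    cat /sys/class/net/cvl_0_0/operstate     # the loop above keeps only interfaces whose state reads "up"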
00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:49.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.415 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:11:49.673 00:11:49.673 --- 10.0.0.2 ping statistics --- 00:11:49.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.673 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:11:49.673 00:11:49.673 --- 10.0.0.1 ping statistics --- 00:11:49.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.673 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:49.673 11:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=883983 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 883983 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 883983 ']' 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:49.673 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.673 [2024-07-12 11:48:39.052410] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:11:49.673 [2024-07-12 11:48:39.052508] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.673 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.673 [2024-07-12 11:48:39.128032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.932 [2024-07-12 11:48:39.247831] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.932 [2024-07-12 11:48:39.247909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
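For reference, the network plumbing that produced the two ping results above (nvmf_tcp_init) boils down to the commands below: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1. This is a recap of the traced sequence only; interface names are the ones this node reported.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator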
00:11:49.932 [2024-07-12 11:48:39.247926] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.932 [2024-07-12 11:48:39.247939] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.932 [2024-07-12 11:48:39.247951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.932 [2024-07-12 11:48:39.248029] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.932 [2024-07-12 11:48:39.248113] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.932 [2024-07-12 11:48:39.248121] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.932 11:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.190 [2024-07-12 11:48:39.590583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.190 11:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.447 11:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:50.447 11:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.706 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:50.706 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:50.963 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:51.257 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8d0ea943-0a39-4a14-b339-60f699bb375c 00:11:51.257 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d0ea943-0a39-4a14-b339-60f699bb375c lvol 20 00:11:51.516 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3a0a22c0-4878-45f0-964b-632a78756fe6 00:11:51.516 11:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:51.773 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a0a22c0-4878-45f0-964b-632a78756fe6 00:11:52.031 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
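Condensed recap of the provisioning chain traced above, before the listener notice that follows: two 64 MiB malloc bdevs are striped into a raid0 bdev, an lvstore and a 20 MiB lvol are carved out of it, and the lvol is exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. Sketch only, with the rpc.py path shortened; the UUIDs are whatever the lvstore/lvol RPCs print on a given run.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID on stdout
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # lvol UUID on stdout
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420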
00:11:52.288 [2024-07-12 11:48:41.661513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.288 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.546 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=884415 00:11:52.546 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:52.546 11:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:52.546 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.480 11:48:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3a0a22c0-4878-45f0-964b-632a78756fe6 MY_SNAPSHOT 00:11:53.738 11:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cc0a7c18-5f65-46b4-98db-295611f941a2 00:11:53.738 11:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3a0a22c0-4878-45f0-964b-632a78756fe6 30 00:11:54.303 11:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cc0a7c18-5f65-46b4-98db-295611f941a2 MY_CLONE 00:11:54.562 11:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a59156ea-af25-4151-9413-ab011320a9bf 00:11:54.562 11:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a59156ea-af25-4151-9413-ab011320a9bf 00:11:55.130 11:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 884415 00:12:03.235 Initializing NVMe Controllers 00:12:03.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:03.235 Controller IO queue size 128, less than required. 00:12:03.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:03.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:03.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:03.235 Initialization complete. Launching workers. 
00:12:03.235 ======================================================== 00:12:03.235 Latency(us) 00:12:03.235 Device Information : IOPS MiB/s Average min max 00:12:03.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10495.10 41.00 12201.75 1363.24 70512.18 00:12:03.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10438.10 40.77 12262.60 2371.45 62823.18 00:12:03.235 ======================================================== 00:12:03.235 Total : 20933.20 81.77 12232.09 1363.24 70512.18 00:12:03.235 00:12:03.235 11:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:03.235 11:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a0a22c0-4878-45f0-964b-632a78756fe6 00:12:03.493 11:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d0ea943-0a39-4a14-b339-60f699bb375c 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:03.750 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.751 rmmod nvme_tcp 00:12:03.751 rmmod nvme_fabrics 00:12:03.751 rmmod nvme_keyring 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 883983 ']' 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 883983 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 883983 ']' 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 883983 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 883983 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 883983' 00:12:03.751 killing process with pid 883983 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 883983 00:12:03.751 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 883983 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.314 11:48:53 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.314 11:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.213 00:12:06.213 real 0m18.808s 00:12:06.213 user 1m4.097s 00:12:06.213 sys 0m5.505s 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:06.213 ************************************ 00:12:06.213 END TEST nvmf_lvol 00:12:06.213 ************************************ 00:12:06.213 11:48:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:06.213 11:48:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:06.213 11:48:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:06.213 11:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.213 ************************************ 00:12:06.213 START TEST nvmf_lvs_grow 00:12:06.213 ************************************ 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:06.213 * Looking for test storage... 
00:12:06.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.213 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.214 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:06.471 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:06.472 11:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.472 11:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.371 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:12:08.372 00:12:08.372 --- 10.0.0.2 ping statistics --- 00:12:08.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.372 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:12:08.372 00:12:08.372 --- 10.0.0.1 ping statistics --- 00:12:08.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.372 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=887680 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 887680 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 887680 ']' 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:08.372 11:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.630 [2024-07-12 11:48:57.905478] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:08.630 [2024-07-12 11:48:57.905571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.630 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.630 [2024-07-12 11:48:57.970530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.630 [2024-07-12 11:48:58.081279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.630 [2024-07-12 11:48:58.081356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
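The nvmf_tcp_init steps above split the two ice ports across a network namespace so target and initiator traffic really crosses the link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and carries the target, cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator side, and nvmf_tgt is then launched inside the namespace. Condensed from the commands in this run (interface names are whatever the PCI scan picked here; paths shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # default namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target runs inside the namespace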
00:12:08.630 [2024-07-12 11:48:58.081385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.630 [2024-07-12 11:48:58.081398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.630 [2024-07-12 11:48:58.081408] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.630 [2024-07-12 11:48:58.081451] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.888 11:48:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:09.145 [2024-07-12 11:48:58.496463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.145 ************************************ 00:12:09.145 START TEST lvs_grow_clean 00:12:09.145 ************************************ 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.145 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.402 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:09.402 11:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:09.660 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:09.660 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:09.660 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:09.918 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:09.918 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:09.918 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 lvol 150 00:12:10.176 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f96a09c9-c01d-4d63-96be-1e3b352b8dea 00:12:10.176 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.176 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:10.434 [2024-07-12 11:48:59.794011] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:10.434 [2024-07-12 11:48:59.794091] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:10.434 true 00:12:10.434 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:10.434 11:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:10.692 11:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:10.692 11:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:10.950 11:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f96a09c9-c01d-4d63-96be-1e3b352b8dea 00:12:11.209 11:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:11.468 [2024-07-12 11:49:00.881298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.468 11:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=888114 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 888114 /var/tmp/bdevperf.sock 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 888114 ']' 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:11.726 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 [2024-07-12 11:49:01.181458] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
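The lvs_grow_clean setup above reduces to a short RPC sequence against that target. A condensed sketch, with $rpc standing in for scripts/rpc.py, $lvs/$lvol for the UUIDs printed above, and the backing file path shortened to aio_bdev_file (the run uses test/nvmf/target/aio_bdev for both the file and the bdev name):

truncate -s 200M aio_bdev_file                          # 200M backing file
$rpc bdev_aio_create aio_bdev_file aio_bdev 4096        # expose it as an aio bdev
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 clusters at 200M
$rpc bdev_lvol_create -u $lvs lvol 150                  # 150M lvol on the store
truncate -s 400M aio_bdev_file                          # grow the file; the store still reports 49
$rpc bdev_aio_rescan aio_bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to the exported lvol over TCP (bdev_nvme_attach_controller) and drives ten seconds of 4k random writes while the store is grown underneath it, as the log shows next.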
00:12:11.726 [2024-07-12 11:49:01.181541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888114 ] 00:12:11.726 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.985 [2024-07-12 11:49:01.244507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.985 [2024-07-12 11:49:01.362380] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.985 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:11.985 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:12:11.985 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:12.551 Nvme0n1 00:12:12.551 11:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:12.809 [ 00:12:12.809 { 00:12:12.809 "name": "Nvme0n1", 00:12:12.809 "aliases": [ 00:12:12.809 "f96a09c9-c01d-4d63-96be-1e3b352b8dea" 00:12:12.809 ], 00:12:12.809 "product_name": "NVMe disk", 00:12:12.809 "block_size": 4096, 00:12:12.809 "num_blocks": 38912, 00:12:12.809 "uuid": "f96a09c9-c01d-4d63-96be-1e3b352b8dea", 00:12:12.809 "assigned_rate_limits": { 00:12:12.809 "rw_ios_per_sec": 0, 00:12:12.809 "rw_mbytes_per_sec": 0, 00:12:12.809 "r_mbytes_per_sec": 0, 00:12:12.809 "w_mbytes_per_sec": 0 00:12:12.809 }, 00:12:12.809 "claimed": false, 00:12:12.809 "zoned": false, 00:12:12.809 "supported_io_types": { 00:12:12.809 "read": true, 00:12:12.809 "write": true, 00:12:12.809 "unmap": true, 00:12:12.809 "write_zeroes": true, 00:12:12.809 "flush": true, 00:12:12.809 "reset": true, 00:12:12.809 "compare": true, 00:12:12.809 "compare_and_write": true, 00:12:12.809 "abort": true, 00:12:12.809 "nvme_admin": true, 00:12:12.809 "nvme_io": true 00:12:12.809 }, 00:12:12.809 "memory_domains": [ 00:12:12.809 { 00:12:12.809 "dma_device_id": "system", 00:12:12.809 "dma_device_type": 1 00:12:12.809 } 00:12:12.809 ], 00:12:12.809 "driver_specific": { 00:12:12.809 "nvme": [ 00:12:12.809 { 00:12:12.809 "trid": { 00:12:12.809 "trtype": "TCP", 00:12:12.809 "adrfam": "IPv4", 00:12:12.809 "traddr": "10.0.0.2", 00:12:12.809 "trsvcid": "4420", 00:12:12.809 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:12.809 }, 00:12:12.809 "ctrlr_data": { 00:12:12.809 "cntlid": 1, 00:12:12.809 "vendor_id": "0x8086", 00:12:12.809 "model_number": "SPDK bdev Controller", 00:12:12.809 "serial_number": "SPDK0", 00:12:12.809 "firmware_revision": "24.09", 00:12:12.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:12.809 "oacs": { 00:12:12.809 "security": 0, 00:12:12.809 "format": 0, 00:12:12.809 "firmware": 0, 00:12:12.809 "ns_manage": 0 00:12:12.810 }, 00:12:12.810 "multi_ctrlr": true, 00:12:12.810 "ana_reporting": false 00:12:12.810 }, 00:12:12.810 "vs": { 00:12:12.810 "nvme_version": "1.3" 00:12:12.810 }, 00:12:12.810 "ns_data": { 00:12:12.810 "id": 1, 00:12:12.810 "can_share": true 00:12:12.810 } 00:12:12.810 } 00:12:12.810 ], 00:12:12.810 "mp_policy": "active_passive" 00:12:12.810 } 00:12:12.810 } 00:12:12.810 ] 00:12:12.810 11:49:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=888249 00:12:12.810 11:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:12.810 11:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:12.810 Running I/O for 10 seconds... 00:12:14.187 Latency(us) 00:12:14.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.187 Nvme0n1 : 1.00 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:12:14.187 =================================================================================================================== 00:12:14.187 Total : 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:12:14.187 00:12:14.799 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:14.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.800 Nvme0n1 : 2.00 14161.00 55.32 0.00 0.00 0.00 0.00 0.00 00:12:14.800 =================================================================================================================== 00:12:14.800 Total : 14161.00 55.32 0.00 0.00 0.00 0.00 0.00 00:12:14.800 00:12:15.058 true 00:12:15.058 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:15.058 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:15.316 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:15.316 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:15.316 11:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 888249 00:12:15.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.882 Nvme0n1 : 3.00 14393.67 56.23 0.00 0.00 0.00 0.00 0.00 00:12:15.882 =================================================================================================================== 00:12:15.882 Total : 14393.67 56.23 0.00 0.00 0.00 0.00 0.00 00:12:15.882 00:12:16.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.818 Nvme0n1 : 4.00 14446.50 56.43 0.00 0.00 0.00 0.00 0.00 00:12:16.818 =================================================================================================================== 00:12:16.818 Total : 14446.50 56.43 0.00 0.00 0.00 0.00 0.00 00:12:16.818 00:12:18.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.193 Nvme0n1 : 5.00 14529.00 56.75 0.00 0.00 0.00 0.00 0.00 00:12:18.193 =================================================================================================================== 00:12:18.193 Total : 14529.00 56.75 0.00 0.00 0.00 0.00 0.00 00:12:18.193 00:12:19.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.128 Nvme0n1 : 6.00 14584.00 56.97 0.00 0.00 0.00 0.00 0.00 00:12:19.128 
=================================================================================================================== 00:12:19.128 Total : 14584.00 56.97 0.00 0.00 0.00 0.00 0.00 00:12:19.128 00:12:20.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.063 Nvme0n1 : 7.00 14605.14 57.05 0.00 0.00 0.00 0.00 0.00 00:12:20.063 =================================================================================================================== 00:12:20.063 Total : 14605.14 57.05 0.00 0.00 0.00 0.00 0.00 00:12:20.063 00:12:20.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.999 Nvme0n1 : 8.00 14605.50 57.05 0.00 0.00 0.00 0.00 0.00 00:12:20.999 =================================================================================================================== 00:12:20.999 Total : 14605.50 57.05 0.00 0.00 0.00 0.00 0.00 00:12:20.999 00:12:21.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.934 Nvme0n1 : 9.00 14619.56 57.11 0.00 0.00 0.00 0.00 0.00 00:12:21.934 =================================================================================================================== 00:12:21.934 Total : 14619.56 57.11 0.00 0.00 0.00 0.00 0.00 00:12:21.934 00:12:22.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.870 Nvme0n1 : 10.00 14630.80 57.15 0.00 0.00 0.00 0.00 0.00 00:12:22.870 =================================================================================================================== 00:12:22.870 Total : 14630.80 57.15 0.00 0.00 0.00 0.00 0.00 00:12:22.870 00:12:22.870 00:12:22.870 Latency(us) 00:12:22.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.870 Nvme0n1 : 10.01 14635.73 57.17 0.00 0.00 8740.94 5267.15 21554.06 00:12:22.870 =================================================================================================================== 00:12:22.870 Total : 14635.73 57.17 0.00 0.00 8740.94 5267.15 21554.06 00:12:22.870 0 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 888114 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 888114 ']' 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 888114 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 888114 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 888114' 00:12:22.870 killing process with pid 888114 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 888114 00:12:22.870 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.870 00:12:22.870 Latency(us) 00:12:22.870 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:12:22.870 =================================================================================================================== 00:12:22.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.870 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 888114 00:12:23.128 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.386 11:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:23.644 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:23.644 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:23.901 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:23.901 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:23.901 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:24.160 [2024-07-12 11:49:13.598605] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:24.160 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:24.418 request: 00:12:24.418 { 00:12:24.418 "uuid": "4a586ec6-734a-4094-9fc3-1da3ab2f1dc8", 00:12:24.418 "method": "bdev_lvol_get_lvstores", 00:12:24.418 "req_id": 1 00:12:24.418 } 00:12:24.418 Got JSON-RPC error response 00:12:24.418 response: 00:12:24.418 { 00:12:24.418 "code": -19, 00:12:24.418 "message": "No such device" 00:12:24.418 } 00:12:24.676 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:12:24.676 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:24.676 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:24.676 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:24.676 11:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:24.676 aio_bdev 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f96a09c9-c01d-4d63-96be-1e3b352b8dea 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=f96a09c9-c01d-4d63-96be-1e3b352b8dea 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:24.676 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:24.934 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f96a09c9-c01d-4d63-96be-1e3b352b8dea -t 2000 00:12:25.192 [ 00:12:25.192 { 00:12:25.192 "name": "f96a09c9-c01d-4d63-96be-1e3b352b8dea", 00:12:25.192 "aliases": [ 00:12:25.192 "lvs/lvol" 00:12:25.192 ], 00:12:25.192 "product_name": "Logical Volume", 00:12:25.192 "block_size": 4096, 00:12:25.192 "num_blocks": 38912, 00:12:25.192 "uuid": "f96a09c9-c01d-4d63-96be-1e3b352b8dea", 00:12:25.192 "assigned_rate_limits": { 00:12:25.192 "rw_ios_per_sec": 0, 00:12:25.192 "rw_mbytes_per_sec": 0, 00:12:25.192 "r_mbytes_per_sec": 0, 00:12:25.192 "w_mbytes_per_sec": 0 00:12:25.192 }, 00:12:25.192 "claimed": false, 00:12:25.192 "zoned": false, 00:12:25.192 "supported_io_types": { 00:12:25.192 "read": true, 00:12:25.192 "write": true, 00:12:25.192 "unmap": true, 00:12:25.192 "write_zeroes": true, 00:12:25.192 "flush": false, 00:12:25.192 "reset": true, 00:12:25.192 "compare": false, 00:12:25.192 "compare_and_write": false, 00:12:25.192 "abort": false, 00:12:25.192 "nvme_admin": false, 00:12:25.192 "nvme_io": false 00:12:25.192 }, 00:12:25.192 "driver_specific": { 00:12:25.192 "lvol": { 00:12:25.192 "lvol_store_uuid": "4a586ec6-734a-4094-9fc3-1da3ab2f1dc8", 00:12:25.192 "base_bdev": "aio_bdev", 
00:12:25.192 "thin_provision": false, 00:12:25.192 "num_allocated_clusters": 38, 00:12:25.192 "snapshot": false, 00:12:25.192 "clone": false, 00:12:25.192 "esnap_clone": false 00:12:25.192 } 00:12:25.192 } 00:12:25.192 } 00:12:25.192 ] 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:25.450 11:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:25.708 11:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:25.708 11:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f96a09c9-c01d-4d63-96be-1e3b352b8dea 00:12:26.274 11:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a586ec6-734a-4094-9fc3-1da3ab2f1dc8 00:12:26.274 11:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:26.532 11:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.532 00:12:26.532 real 0m17.463s 00:12:26.532 user 0m16.935s 00:12:26.532 sys 0m1.895s 00:12:26.532 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:26.532 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:26.532 ************************************ 00:12:26.532 END TEST lvs_grow_clean 00:12:26.532 ************************************ 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:26.790 ************************************ 00:12:26.790 START TEST lvs_grow_dirty 00:12:26.790 ************************************ 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.790 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.049 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:27.049 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:27.307 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:27.307 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:27.307 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:27.564 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:27.564 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:27.564 11:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 lvol 150 00:12:27.822 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:27.822 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.822 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:27.822 [2024-07-12 11:49:17.292040] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:27.822 [2024-07-12 11:49:17.292134] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:27.822 true 00:12:27.822 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:27.822 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:28.080 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:28.080 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:28.339 11:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:28.904 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:28.904 [2024-07-12 11:49:18.323197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.904 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=890166 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 890166 /var/tmp/bdevperf.sock 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 890166 ']' 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:29.163 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:29.163 [2024-07-12 11:49:18.619814] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
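The growth check itself, exercised once in the clean variant above and repeated here for the dirty case, boils down to extending the backing file, rescanning the aio bdev, and growing the lvstore while bdevperf I/O is in flight; with 4M clusters the count goes from 49 to 99, and the clean variant then expects 61 free clusters. A sketch using the same placeholders as above:

truncate -s 400M aio_bdev_file
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u $lvs
$rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # expect 99 after the grow
$rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # 61 in the clean case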
00:12:29.163 [2024-07-12 11:49:18.619906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890166 ] 00:12:29.163 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.421 [2024-07-12 11:49:18.682132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.421 [2024-07-12 11:49:18.803278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.679 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:29.679 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:12:29.679 11:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:29.937 Nvme0n1 00:12:29.937 11:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:30.195 [ 00:12:30.195 { 00:12:30.195 "name": "Nvme0n1", 00:12:30.195 "aliases": [ 00:12:30.195 "3843989f-558f-4114-bb11-3a4f2cb473d0" 00:12:30.195 ], 00:12:30.195 "product_name": "NVMe disk", 00:12:30.195 "block_size": 4096, 00:12:30.195 "num_blocks": 38912, 00:12:30.195 "uuid": "3843989f-558f-4114-bb11-3a4f2cb473d0", 00:12:30.195 "assigned_rate_limits": { 00:12:30.195 "rw_ios_per_sec": 0, 00:12:30.195 "rw_mbytes_per_sec": 0, 00:12:30.195 "r_mbytes_per_sec": 0, 00:12:30.195 "w_mbytes_per_sec": 0 00:12:30.195 }, 00:12:30.195 "claimed": false, 00:12:30.195 "zoned": false, 00:12:30.195 "supported_io_types": { 00:12:30.195 "read": true, 00:12:30.195 "write": true, 00:12:30.195 "unmap": true, 00:12:30.195 "write_zeroes": true, 00:12:30.195 "flush": true, 00:12:30.195 "reset": true, 00:12:30.195 "compare": true, 00:12:30.195 "compare_and_write": true, 00:12:30.195 "abort": true, 00:12:30.195 "nvme_admin": true, 00:12:30.195 "nvme_io": true 00:12:30.195 }, 00:12:30.195 "memory_domains": [ 00:12:30.195 { 00:12:30.195 "dma_device_id": "system", 00:12:30.195 "dma_device_type": 1 00:12:30.195 } 00:12:30.195 ], 00:12:30.195 "driver_specific": { 00:12:30.195 "nvme": [ 00:12:30.195 { 00:12:30.195 "trid": { 00:12:30.195 "trtype": "TCP", 00:12:30.195 "adrfam": "IPv4", 00:12:30.195 "traddr": "10.0.0.2", 00:12:30.195 "trsvcid": "4420", 00:12:30.195 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:30.195 }, 00:12:30.195 "ctrlr_data": { 00:12:30.195 "cntlid": 1, 00:12:30.195 "vendor_id": "0x8086", 00:12:30.195 "model_number": "SPDK bdev Controller", 00:12:30.195 "serial_number": "SPDK0", 00:12:30.195 "firmware_revision": "24.09", 00:12:30.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:30.195 "oacs": { 00:12:30.195 "security": 0, 00:12:30.195 "format": 0, 00:12:30.195 "firmware": 0, 00:12:30.195 "ns_manage": 0 00:12:30.195 }, 00:12:30.195 "multi_ctrlr": true, 00:12:30.195 "ana_reporting": false 00:12:30.195 }, 00:12:30.195 "vs": { 00:12:30.195 "nvme_version": "1.3" 00:12:30.195 }, 00:12:30.195 "ns_data": { 00:12:30.195 "id": 1, 00:12:30.195 "can_share": true 00:12:30.195 } 00:12:30.195 } 00:12:30.195 ], 00:12:30.195 "mp_policy": "active_passive" 00:12:30.195 } 00:12:30.195 } 00:12:30.195 ] 00:12:30.195 11:49:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=890299 00:12:30.195 11:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:30.195 11:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:30.195 Running I/O for 10 seconds... 00:12:31.163 Latency(us) 00:12:31.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.163 Nvme0n1 : 1.00 14036.00 54.83 0.00 0.00 0.00 0.00 0.00 00:12:31.163 =================================================================================================================== 00:12:31.163 Total : 14036.00 54.83 0.00 0.00 0.00 0.00 0.00 00:12:31.163 00:12:32.096 11:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:32.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.096 Nvme0n1 : 2.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:12:32.096 =================================================================================================================== 00:12:32.096 Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:12:32.096 00:12:32.354 true 00:12:32.354 11:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:32.354 11:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:32.612 11:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:32.612 11:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:32.612 11:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 890299 00:12:33.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.179 Nvme0n1 : 3.00 14478.67 56.56 0.00 0.00 0.00 0.00 0.00 00:12:33.179 =================================================================================================================== 00:12:33.179 Total : 14478.67 56.56 0.00 0.00 0.00 0.00 0.00 00:12:33.179 00:12:34.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.116 Nvme0n1 : 4.00 14637.25 57.18 0.00 0.00 0.00 0.00 0.00 00:12:34.116 =================================================================================================================== 00:12:34.116 Total : 14637.25 57.18 0.00 0.00 0.00 0.00 0.00 00:12:34.116 00:12:35.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.491 Nvme0n1 : 5.00 14644.40 57.20 0.00 0.00 0.00 0.00 0.00 00:12:35.491 =================================================================================================================== 00:12:35.491 Total : 14644.40 57.20 0.00 0.00 0.00 0.00 0.00 00:12:35.491 00:12:36.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.436 Nvme0n1 : 6.00 14670.83 57.31 0.00 0.00 0.00 0.00 0.00 00:12:36.436 
=================================================================================================================== 00:12:36.436 Total : 14670.83 57.31 0.00 0.00 0.00 0.00 0.00 00:12:36.436 00:12:37.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.372 Nvme0n1 : 7.00 14752.14 57.63 0.00 0.00 0.00 0.00 0.00 00:12:37.372 =================================================================================================================== 00:12:37.372 Total : 14752.14 57.63 0.00 0.00 0.00 0.00 0.00 00:12:37.372 00:12:38.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.321 Nvme0n1 : 8.00 14797.25 57.80 0.00 0.00 0.00 0.00 0.00 00:12:38.321 =================================================================================================================== 00:12:38.321 Total : 14797.25 57.80 0.00 0.00 0.00 0.00 0.00 00:12:38.321 00:12:39.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.258 Nvme0n1 : 9.00 14825.78 57.91 0.00 0.00 0.00 0.00 0.00 00:12:39.258 =================================================================================================================== 00:12:39.258 Total : 14825.78 57.91 0.00 0.00 0.00 0.00 0.00 00:12:39.258 00:12:40.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.197 Nvme0n1 : 10.00 14880.20 58.13 0.00 0.00 0.00 0.00 0.00 00:12:40.197 =================================================================================================================== 00:12:40.197 Total : 14880.20 58.13 0.00 0.00 0.00 0.00 0.00 00:12:40.197 00:12:40.197 00:12:40.197 Latency(us) 00:12:40.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.197 Nvme0n1 : 10.00 14885.68 58.15 0.00 0.00 8593.61 2767.08 17573.36 00:12:40.197 =================================================================================================================== 00:12:40.197 Total : 14885.68 58.15 0.00 0.00 8593.61 2767.08 17573.36 00:12:40.197 0 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 890166 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 890166 ']' 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 890166 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 890166 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 890166' 00:12:40.197 killing process with pid 890166 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 890166 00:12:40.197 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.197 00:12:40.197 Latency(us) 00:12:40.197 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:12:40.197 =================================================================================================================== 00:12:40.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.197 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 890166 00:12:40.455 11:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.713 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.971 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:40.971 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 887680 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 887680 00:12:41.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 887680 Killed "${NVMF_APP[@]}" "$@" 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=891634 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 891634 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 891634 ']' 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
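The trace that follows replays the dirty-recovery path: the freshly started target re-creates the AIO bdev over the same backing file, blobstore recovery runs, and the previously grown lvstore geometry is verified. As a condensed reading aid only, not part of the captured run, the sequence amounts to the bash sketch below (rpc.py path abbreviated; UUIDs are the ones printed in the trace above):

  rpc=scripts/rpc.py                                  # shorthand for the full rpc.py path used in this job
  lvs_uuid=8837f7ea-3f3c-4e68-83a5-f4472e107701       # lvstore UUID from the trace
  lvol_uuid=3843989f-558f-4114-bb11-3a4f2cb473d0      # lvol UUID from the trace

  # Re-create the AIO bdev over the same backing file; blobstore recovery replays the dirty metadata.
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b "$lvol_uuid" -t 2000         # the lvol must reappear once examine completes

  # The geometry grown before the unclean shutdown must survive recovery.
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))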
00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:41.231 11:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:41.489 [2024-07-12 11:49:30.750768] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:41.489 [2024-07-12 11:49:30.750863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.489 [2024-07-12 11:49:30.816744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.489 [2024-07-12 11:49:30.925819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.489 [2024-07-12 11:49:30.925900] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.489 [2024-07-12 11:49:30.925915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.489 [2024-07-12 11:49:30.925926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.489 [2024-07-12 11:49:30.925935] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.489 [2024-07-12 11:49:30.925986] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.748 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:42.007 [2024-07-12 11:49:31.340770] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:42.008 [2024-07-12 11:49:31.340931] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:42.008 [2024-07-12 11:49:31.340990] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:42.008 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:42.266 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3843989f-558f-4114-bb11-3a4f2cb473d0 -t 2000 00:12:42.524 [ 00:12:42.524 { 00:12:42.524 "name": "3843989f-558f-4114-bb11-3a4f2cb473d0", 00:12:42.524 "aliases": [ 00:12:42.524 "lvs/lvol" 00:12:42.524 ], 00:12:42.524 "product_name": "Logical Volume", 00:12:42.524 "block_size": 4096, 00:12:42.524 "num_blocks": 38912, 00:12:42.524 "uuid": "3843989f-558f-4114-bb11-3a4f2cb473d0", 00:12:42.524 "assigned_rate_limits": { 00:12:42.524 "rw_ios_per_sec": 0, 00:12:42.524 "rw_mbytes_per_sec": 0, 00:12:42.524 "r_mbytes_per_sec": 0, 00:12:42.524 "w_mbytes_per_sec": 0 00:12:42.524 }, 00:12:42.524 "claimed": false, 00:12:42.524 "zoned": false, 00:12:42.524 "supported_io_types": { 00:12:42.524 "read": true, 00:12:42.524 "write": true, 00:12:42.524 "unmap": true, 00:12:42.524 "write_zeroes": true, 00:12:42.524 "flush": false, 00:12:42.524 "reset": true, 00:12:42.524 "compare": false, 00:12:42.524 "compare_and_write": false, 00:12:42.524 "abort": false, 00:12:42.524 "nvme_admin": false, 00:12:42.524 "nvme_io": false 00:12:42.524 }, 00:12:42.524 "driver_specific": { 00:12:42.524 "lvol": { 00:12:42.524 "lvol_store_uuid": "8837f7ea-3f3c-4e68-83a5-f4472e107701", 00:12:42.524 "base_bdev": "aio_bdev", 00:12:42.524 "thin_provision": false, 00:12:42.524 "num_allocated_clusters": 38, 00:12:42.524 "snapshot": false, 00:12:42.524 "clone": false, 00:12:42.524 "esnap_clone": false 00:12:42.524 } 00:12:42.524 } 00:12:42.524 } 00:12:42.524 ] 00:12:42.524 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:12:42.524 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:42.524 11:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:42.801 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:42.801 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:42.801 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:43.060 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:43.060 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:43.320 [2024-07-12 11:49:32.565553] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:43.320 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:43.580 request: 00:12:43.580 { 00:12:43.580 "uuid": "8837f7ea-3f3c-4e68-83a5-f4472e107701", 00:12:43.580 "method": "bdev_lvol_get_lvstores", 00:12:43.580 "req_id": 1 00:12:43.580 } 00:12:43.580 Got JSON-RPC error response 00:12:43.580 response: 00:12:43.580 { 00:12:43.580 "code": -19, 00:12:43.580 "message": "No such device" 00:12:43.580 } 00:12:43.580 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:12:43.580 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:43.580 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:43.580 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:43.580 11:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:43.839 aio_bdev 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
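The JSON-RPC error above is the expected outcome: deleting the backing aio_bdev hot-removes the lvstore, so a subsequent bdev_lvol_get_lvstores must fail with -19 (No such device). A minimal sketch of that negative check, using the same shorthand as the sketch above:

  # Removing the base bdev hot-removes the lvstore; the query must now fail.
  $rpc bdev_aio_delete aio_bdev
  if $rpc bdev_lvol_get_lvstores -u "$lvs_uuid"; then
      echo "lvstore unexpectedly still present" >&2
      exit 1
  fi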
00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:43.840 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:44.098 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3843989f-558f-4114-bb11-3a4f2cb473d0 -t 2000 00:12:44.098 [ 00:12:44.098 { 00:12:44.098 "name": "3843989f-558f-4114-bb11-3a4f2cb473d0", 00:12:44.098 "aliases": [ 00:12:44.098 "lvs/lvol" 00:12:44.098 ], 00:12:44.098 "product_name": "Logical Volume", 00:12:44.098 "block_size": 4096, 00:12:44.098 "num_blocks": 38912, 00:12:44.098 "uuid": "3843989f-558f-4114-bb11-3a4f2cb473d0", 00:12:44.098 "assigned_rate_limits": { 00:12:44.098 "rw_ios_per_sec": 0, 00:12:44.098 "rw_mbytes_per_sec": 0, 00:12:44.098 "r_mbytes_per_sec": 0, 00:12:44.098 "w_mbytes_per_sec": 0 00:12:44.098 }, 00:12:44.098 "claimed": false, 00:12:44.098 "zoned": false, 00:12:44.098 "supported_io_types": { 00:12:44.098 "read": true, 00:12:44.098 "write": true, 00:12:44.098 "unmap": true, 00:12:44.098 "write_zeroes": true, 00:12:44.098 "flush": false, 00:12:44.098 "reset": true, 00:12:44.098 "compare": false, 00:12:44.098 "compare_and_write": false, 00:12:44.098 "abort": false, 00:12:44.098 "nvme_admin": false, 00:12:44.098 "nvme_io": false 00:12:44.098 }, 00:12:44.098 "driver_specific": { 00:12:44.098 "lvol": { 00:12:44.098 "lvol_store_uuid": "8837f7ea-3f3c-4e68-83a5-f4472e107701", 00:12:44.098 "base_bdev": "aio_bdev", 00:12:44.098 "thin_provision": false, 00:12:44.098 "num_allocated_clusters": 38, 00:12:44.098 "snapshot": false, 00:12:44.098 "clone": false, 00:12:44.098 "esnap_clone": false 00:12:44.098 } 00:12:44.098 } 00:12:44.098 } 00:12:44.098 ] 00:12:44.098 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:12:44.098 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:44.098 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:44.355 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:44.355 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:44.355 11:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:44.614 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:44.614 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3843989f-558f-4114-bb11-3a4f2cb473d0 00:12:44.875 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8837f7ea-3f3c-4e68-83a5-f4472e107701 00:12:45.133 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:45.392 00:12:45.392 real 0m18.773s 00:12:45.392 user 0m47.253s 00:12:45.392 sys 0m4.885s 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:45.392 ************************************ 00:12:45.392 END TEST lvs_grow_dirty 00:12:45.392 ************************************ 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:45.392 nvmf_trace.0 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.392 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.652 rmmod nvme_tcp 00:12:45.652 rmmod nvme_fabrics 00:12:45.652 rmmod nvme_keyring 00:12:45.652 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.652 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:45.652 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:45.652 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 891634 ']' 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 891634 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 891634 ']' 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 891634 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 891634 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 891634' 00:12:45.653 killing process with pid 891634 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 891634 00:12:45.653 11:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 891634 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.913 11:49:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.850 11:49:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.850 00:12:47.850 real 0m41.663s 00:12:47.850 user 1m9.839s 00:12:47.850 sys 0m8.656s 00:12:47.850 11:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:47.850 11:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:47.850 ************************************ 00:12:47.850 END TEST nvmf_lvs_grow 00:12:47.850 ************************************ 00:12:48.113 11:49:37 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:48.113 11:49:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:48.113 11:49:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:48.113 11:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.113 ************************************ 00:12:48.113 START TEST nvmf_bdev_io_wait 00:12:48.113 ************************************ 00:12:48.113 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:48.113 * Looking for test storage... 
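The nvmf_bdev_io_wait test starting here drives a single Malloc-backed subsystem from four concurrent bdevperf instances, one per workload. As a condensed reading aid only, not part of the captured run, the target-side setup traced below boils down to the following bash sketch (rpc.py path abbreviated as before; the target is started with --wait-for-rpc so bdev options can be set before framework init):

  rpc=scripts/rpc.py                               # shorthand for the full rpc.py path used in this job

  # Shrink the bdev_io pool/cache so submissions have to queue on the io_wait path (the behaviour under test).
  $rpc bdev_set_options -p 5 -c 1
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192

  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Four bdevperf instances then attach over NVMe/TCP, one per workload
  # (-w write, read, flush and unmap; each with -q 128 -o 4096 -t 1 -s 256).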
00:12:48.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.114 11:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.018 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:50.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:50.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:50.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:50.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:50.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:50.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:12:50.019 00:12:50.019 --- 10.0.0.2 ping statistics --- 00:12:50.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.019 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:50.019 00:12:50.019 --- 10.0.0.1 ping statistics --- 00:12:50.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.019 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:50.019 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.278 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=894156 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 894156 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 894156 ']' 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.279 [2024-07-12 11:49:39.562408] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:12:50.279 [2024-07-12 11:49:39.562501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.279 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.279 [2024-07-12 11:49:39.627422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.279 [2024-07-12 11:49:39.740176] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.279 [2024-07-12 11:49:39.740230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.279 [2024-07-12 11:49:39.740260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.279 [2024-07-12 11:49:39.740271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.279 [2024-07-12 11:49:39.740285] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.279 [2024-07-12 11:49:39.740343] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.279 [2024-07-12 11:49:39.740403] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.279 [2024-07-12 11:49:39.740472] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.279 [2024-07-12 11:49:39.740476] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:50.279 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.538 [2024-07-12 11:49:39.877178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.538 11:49:39 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.538 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 Malloc0 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 [2024-07-12 11:49:39.938581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=894178 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=894180 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=894182 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.539 { 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme$subsystem", 00:12:50.539 "trtype": "$TEST_TRANSPORT", 00:12:50.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "$NVMF_PORT", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.539 "hdgst": ${hdgst:-false}, 00:12:50.539 "ddgst": ${ddgst:-false} 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 } 00:12:50.539 EOF 00:12:50.539 )") 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=894184 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.539 { 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme$subsystem", 00:12:50.539 "trtype": "$TEST_TRANSPORT", 00:12:50.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "$NVMF_PORT", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.539 "hdgst": ${hdgst:-false}, 00:12:50.539 "ddgst": ${ddgst:-false} 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 } 00:12:50.539 EOF 00:12:50.539 )") 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.539 { 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme$subsystem", 00:12:50.539 "trtype": "$TEST_TRANSPORT", 00:12:50.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "$NVMF_PORT", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.539 "hdgst": ${hdgst:-false}, 00:12:50.539 "ddgst": ${ddgst:-false} 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 } 00:12:50.539 EOF 00:12:50.539 )") 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.539 { 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme$subsystem", 
00:12:50.539 "trtype": "$TEST_TRANSPORT", 00:12:50.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "$NVMF_PORT", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.539 "hdgst": ${hdgst:-false}, 00:12:50.539 "ddgst": ${ddgst:-false} 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 } 00:12:50.539 EOF 00:12:50.539 )") 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 894178 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme1", 00:12:50.539 "trtype": "tcp", 00:12:50.539 "traddr": "10.0.0.2", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "4420", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.539 "hdgst": false, 00:12:50.539 "ddgst": false 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 }' 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme1", 00:12:50.539 "trtype": "tcp", 00:12:50.539 "traddr": "10.0.0.2", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "4420", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.539 "hdgst": false, 00:12:50.539 "ddgst": false 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 }' 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme1", 00:12:50.539 "trtype": "tcp", 00:12:50.539 "traddr": "10.0.0.2", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "4420", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.539 "hdgst": false, 00:12:50.539 "ddgst": false 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 00:12:50.539 }' 00:12:50.539 11:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.539 "params": { 00:12:50.539 "name": "Nvme1", 00:12:50.539 "trtype": "tcp", 00:12:50.539 "traddr": "10.0.0.2", 00:12:50.539 "adrfam": "ipv4", 00:12:50.539 "trsvcid": "4420", 00:12:50.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.539 "hdgst": false, 00:12:50.539 "ddgst": false 00:12:50.539 }, 00:12:50.539 "method": "bdev_nvme_attach_controller" 
00:12:50.539 }' 00:12:50.539 [2024-07-12 11:49:39.986634] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:50.539 [2024-07-12 11:49:39.986634] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:50.539 [2024-07-12 11:49:39.986637] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:50.539 [2024-07-12 11:49:39.986641] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:12:50.539 [2024-07-12 11:49:39.986721] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 11:49:39.986723] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 11:49:39.986723] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:50.540 [2024-07-12 11:49:39.986723] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:50.540 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:50.540 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.797 [2024-07-12 11:49:40.167689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.797 [2024-07-12 11:49:40.267622] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.797 [2024-07-12 11:49:40.268961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.055 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.055 [2024-07-12 11:49:40.367658] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:12:51.055 [2024-07-12 11:49:40.371455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.055 [2024-07-12 11:49:40.442030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.055 [2024-07-12 11:49:40.469982] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:12:51.055 [2024-07-12 11:49:40.533234] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:12:51.314 Running I/O for 1 seconds... 00:12:51.314 Running I/O for 1 seconds... 00:12:51.314 Running I/O for 1 seconds... 00:12:51.314 Running I/O for 1 seconds... 
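The trace above is the setup half of the bdev_io_wait test: the target exposes a 64 MiB Malloc0 namespace on nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and four bdevperf instances (write on core mask 0x10 / -i 1, read on 0x20 / -i 2, flush on 0x40 / -i 3, unmap on 0x80 / -i 4) are launched concurrently, each reading a gen_nvmf_target_json config from /dev/fd/63. The four "DPDK EAL parameters" lines appear interleaved because the processes start at the same time and share the console; the --file-prefix values spdk1..spdk4 follow the -i shm ids, just as the target got spdk0. The sketch below is illustrative only, not captured output: it shows how one of these jobs (the write job) might be reproduced by hand, assuming the target from the trace is still listening on 10.0.0.2:4420 and assuming the standard SPDK JSON-config wrapper around the bdev_nvme_attach_controller entry printed above; paths are relative to the SPDK tree.

# illustrative sketch, not part of the captured run
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same flags as the WRITE_PID job in the trace: 128 outstanding 4 KiB writes for 1 second
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256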
00:12:52.252 00:12:52.252 Latency(us) 00:12:52.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.252 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:52.252 Nvme1n1 : 1.01 8896.33 34.75 0.00 0.00 14322.53 8301.23 21456.97 00:12:52.252 =================================================================================================================== 00:12:52.252 Total : 8896.33 34.75 0.00 0.00 14322.53 8301.23 21456.97 00:12:52.252 00:12:52.252 Latency(us) 00:12:52.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.252 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:52.252 Nvme1n1 : 1.01 8039.76 31.41 0.00 0.00 15832.94 10194.49 28350.39 00:12:52.252 =================================================================================================================== 00:12:52.252 Total : 8039.76 31.41 0.00 0.00 15832.94 10194.49 28350.39 00:12:52.252 00:12:52.252 Latency(us) 00:12:52.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.252 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:52.252 Nvme1n1 : 1.00 198749.02 776.36 0.00 0.00 641.50 273.07 855.61 00:12:52.252 =================================================================================================================== 00:12:52.252 Total : 198749.02 776.36 0.00 0.00 641.50 273.07 855.61 00:12:52.252 00:12:52.252 Latency(us) 00:12:52.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.252 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:52.252 Nvme1n1 : 1.01 9708.30 37.92 0.00 0.00 13133.52 2706.39 19515.16 00:12:52.252 =================================================================================================================== 00:12:52.252 Total : 9708.30 37.92 0.00 0.00 13133.52 2706.39 19515.16 00:12:52.510 11:49:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 894180 00:12:52.510 11:49:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 894182 00:12:52.510 11:49:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 894184 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.770 rmmod nvme_tcp 00:12:52.770 rmmod nvme_fabrics 00:12:52.770 rmmod nvme_keyring 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 894156 ']' 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 894156 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 894156 ']' 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 894156 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 894156 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 894156' 00:12:52.770 killing process with pid 894156 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 894156 00:12:52.770 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 894156 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.029 11:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.935 11:49:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.935 00:12:54.935 real 0m7.050s 00:12:54.935 user 0m15.547s 00:12:54.935 sys 0m3.689s 00:12:54.935 11:49:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:54.935 11:49:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 ************************************ 00:12:54.935 END TEST nvmf_bdev_io_wait 00:12:54.935 ************************************ 00:12:54.935 11:49:44 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:54.935 11:49:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:54.935 11:49:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:54.935 11:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.193 ************************************ 00:12:55.193 START TEST nvmf_queue_depth 00:12:55.193 ************************************ 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:55.193 * Looking for test storage... 00:12:55.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.193 11:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.099 
11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:12:57.099 00:12:57.099 --- 10.0.0.2 ping statistics --- 00:12:57.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.099 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:57.099 00:12:57.099 --- 10.0.0.1 ping statistics --- 00:12:57.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.099 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=896403 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:57.099 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 896403 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 896403 ']' 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:57.100 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.100 [2024-07-12 11:49:46.559837] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:12:57.100 [2024-07-12 11:49:46.559929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.359 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.359 [2024-07-12 11:49:46.626619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.359 [2024-07-12 11:49:46.742114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.359 [2024-07-12 11:49:46.742182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.359 [2024-07-12 11:49:46.742199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.359 [2024-07-12 11:49:46.742214] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.359 [2024-07-12 11:49:46.742226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.359 [2024-07-12 11:49:46.742256] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 [2024-07-12 11:49:46.892300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 Malloc0 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.618 11:49:46 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 [2024-07-12 11:49:46.959691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=896424 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 896424 /var/tmp/bdevperf.sock 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 896424 ']' 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:57.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:57.618 11:49:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 [2024-07-12 11:49:47.006144] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:12:57.618 [2024-07-12 11:49:47.006237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896424 ] 00:12:57.618 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.618 [2024-07-12 11:49:47.067876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.876 [2024-07-12 11:49:47.185559] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.876 NVMe0n1 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.876 11:49:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:58.134 Running I/O for 10 seconds... 00:13:08.118 00:13:08.118 Latency(us) 00:13:08.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.118 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:08.118 Verification LBA range: start 0x0 length 0x4000 00:13:08.118 NVMe0n1 : 10.09 8476.07 33.11 0.00 0.00 120199.65 22427.88 74565.40 00:13:08.118 =================================================================================================================== 00:13:08.118 Total : 8476.07 33.11 0.00 0.00 120199.65 22427.88 74565.40 00:13:08.118 0 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 896424 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 896424 ']' 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 896424 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:08.118 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 896424 00:13:08.376 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:08.376 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:08.376 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 896424' 00:13:08.376 killing process with pid 896424 00:13:08.376 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 896424 00:13:08.376 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.376 00:13:08.376 Latency(us) 00:13:08.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.376 =================================================================================================================== 00:13:08.376 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:13:08.376 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 896424 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.635 rmmod nvme_tcp 00:13:08.635 rmmod nvme_fabrics 00:13:08.635 rmmod nvme_keyring 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 896403 ']' 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 896403 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 896403 ']' 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 896403 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 896403 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 896403' 00:13:08.635 killing process with pid 896403 00:13:08.635 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 896403 00:13:08.636 11:49:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 896403 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.895 11:49:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.425 11:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.425 00:13:11.425 real 0m15.887s 00:13:11.425 user 0m22.507s 00:13:11.425 sys 0m2.937s 
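For reference, the queue_depth run that produced the timing summary above follows the same target-plus-bdevperf pattern, only at much higher depth and through bdevperf's own RPC socket instead of a pre-generated JSON config: nvmf_tgt is started on core mask 0x2 inside the cvl_0_0_ns_spdk namespace, the TCP transport, 64 MiB Malloc0 namespace and 10.0.0.2:4420 listener are created, and a single verify job runs 1024 outstanding 4 KiB I/Os for 10 seconds. The sketch below is illustrative only and collapses the rpc_cmd/waitforlisten/namespace wrappers used by the test script; paths are relative to the SPDK tree.

# illustrative sketch of the RPC sequence traced above, not captured output
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf waits (-z) on its RPC socket; once /var/tmp/bdevperf.sock is up, the controller
# is attached and the test is kicked off via bdevperf.py
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests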
00:13:11.425 11:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:11.425 11:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:11.425 ************************************ 00:13:11.425 END TEST nvmf_queue_depth 00:13:11.425 ************************************ 00:13:11.425 11:50:00 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:11.425 11:50:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:11.425 11:50:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:11.425 11:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.425 ************************************ 00:13:11.425 START TEST nvmf_target_multipath 00:13:11.425 ************************************ 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:11.425 * Looking for test storage... 00:13:11.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.425 11:50:00 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.425 11:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:13.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.357 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:13.358 00:13:13.358 --- 10.0.0.2 ping statistics --- 00:13:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.358 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:13.358 00:13:13.358 --- 10.0.0.1 ping statistics --- 00:13:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.358 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:13.358 only one NIC for nvmf test 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.358 rmmod nvme_tcp 00:13:13.358 rmmod nvme_fabrics 00:13:13.358 rmmod nvme_keyring 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.358 11:50:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.265 00:13:15.265 real 0m4.226s 00:13:15.265 user 0m0.764s 00:13:15.265 sys 0m1.456s 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:15.265 11:50:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:15.265 ************************************ 00:13:15.265 END TEST nvmf_target_multipath 00:13:15.265 ************************************ 00:13:15.265 11:50:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:15.265 11:50:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:15.265 11:50:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:15.265 11:50:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.265 ************************************ 00:13:15.265 START TEST nvmf_zcopy 00:13:15.265 ************************************ 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:15.265 * Looking for test storage... 
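The nvmf_zcopy test starting here repeats the same nvmftestinit bring-up that the multipath trace above just completed and tore down: one ice port (cvl_0_0, at 0000:0a:00.0) is moved into a network namespace to act as the target, while the other (cvl_0_1, at 0000:0a:00.1) stays in the root namespace as the initiator. Condensed into plain commands copied from the trace (flush and cleanup steps omitted), the per-test network setup is:

# condensed from the nvmftestinit trace; cvl_0_0/cvl_0_1 are the two detected ice ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check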
00:13:15.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.265 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.266 11:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.801 
11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:13:17.801 00:13:17.801 --- 10.0.0.2 ping statistics --- 00:13:17.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.801 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:13:17.801 00:13:17.801 --- 10.0.0.1 ping statistics --- 00:13:17.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.801 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.801 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=901603 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 901603 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 901603 ']' 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:17.802 11:50:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.802 [2024-07-12 11:50:07.040793] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:13:17.802 [2024-07-12 11:50:07.040887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.802 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.802 [2024-07-12 11:50:07.114031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.802 [2024-07-12 11:50:07.235787] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.802 [2024-07-12 11:50:07.235857] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:17.802 [2024-07-12 11:50:07.235884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.802 [2024-07-12 11:50:07.235898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.802 [2024-07-12 11:50:07.235910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.802 [2024-07-12 11:50:07.235940] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 [2024-07-12 11:50:07.391598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 [2024-07-12 11:50:07.407788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 malloc0 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 
11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.062 { 00:13:18.062 "params": { 00:13:18.062 "name": "Nvme$subsystem", 00:13:18.062 "trtype": "$TEST_TRANSPORT", 00:13:18.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.062 "adrfam": "ipv4", 00:13:18.062 "trsvcid": "$NVMF_PORT", 00:13:18.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.062 "hdgst": ${hdgst:-false}, 00:13:18.062 "ddgst": ${ddgst:-false} 00:13:18.062 }, 00:13:18.062 "method": "bdev_nvme_attach_controller" 00:13:18.062 } 00:13:18.062 EOF 00:13:18.062 )") 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:18.062 11:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.062 "params": { 00:13:18.062 "name": "Nvme1", 00:13:18.062 "trtype": "tcp", 00:13:18.062 "traddr": "10.0.0.2", 00:13:18.062 "adrfam": "ipv4", 00:13:18.062 "trsvcid": "4420", 00:13:18.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.062 "hdgst": false, 00:13:18.062 "ddgst": false 00:13:18.062 }, 00:13:18.062 "method": "bdev_nvme_attach_controller" 00:13:18.062 }' 00:13:18.062 [2024-07-12 11:50:07.492021] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:13:18.062 [2024-07-12 11:50:07.492107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901625 ] 00:13:18.062 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.321 [2024-07-12 11:50:07.561541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.321 [2024-07-12 11:50:07.685591] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.581 Running I/O for 10 seconds... 
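While the 10-second verify run above is in flight, the target configuration traced at target/zcopy.sh lines 22 through 30 is worth spelling out. Written as plain scripts/rpc.py invocations (a sketch; the assumption is that rpc_cmd is the autotest wrapper around rpc.py talking to the default /var/tmp/spdk.sock), the sequence is:

# sketch of the target setup traced at target/zcopy.sh@22..@30 above
# (assumption: rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                   # zero-copy TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                                          # 32 MB malloc bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                  # expose malloc0 as NSID 1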
00:13:30.801 00:13:30.801 Latency(us) 00:13:30.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.801 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:30.801 Verification LBA range: start 0x0 length 0x1000 00:13:30.801 Nvme1n1 : 10.02 5728.38 44.75 0.00 0.00 22284.10 2852.03 33593.27 00:13:30.802 =================================================================================================================== 00:13:30.802 Total : 5728.38 44.75 0.00 0.00 22284.10 2852.03 33593.27 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=902939 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:30.802 { 00:13:30.802 "params": { 00:13:30.802 "name": "Nvme$subsystem", 00:13:30.802 "trtype": "$TEST_TRANSPORT", 00:13:30.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.802 "adrfam": "ipv4", 00:13:30.802 "trsvcid": "$NVMF_PORT", 00:13:30.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.802 "hdgst": ${hdgst:-false}, 00:13:30.802 "ddgst": ${ddgst:-false} 00:13:30.802 }, 00:13:30.802 "method": "bdev_nvme_attach_controller" 00:13:30.802 } 00:13:30.802 EOF 00:13:30.802 )") 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:30.802 [2024-07-12 11:50:18.355062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.355103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
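The second bdevperf run that begins above switches to a 5-second randrw workload (a 50/50 read/write mix via -M 50) at the same queue depth and 8192-byte I/O size. Its --json /dev/fd/63 argument is the read end of the generated configuration; presumably the script feeds gen_nvmf_target_json to bdevperf through process substitution, along these lines:

# sketch of how the --json /dev/fd/63 argument above is plausibly produced
# (assumption: process substitution around gen_nvmf_target_json, whose resolved
#  bdev_nvme_attach_controller config is printed just below)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192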
00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:30.802 11:50:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:30.802 "params": { 00:13:30.802 "name": "Nvme1", 00:13:30.802 "trtype": "tcp", 00:13:30.802 "traddr": "10.0.0.2", 00:13:30.802 "adrfam": "ipv4", 00:13:30.802 "trsvcid": "4420", 00:13:30.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.802 "hdgst": false, 00:13:30.802 "ddgst": false 00:13:30.802 }, 00:13:30.802 "method": "bdev_nvme_attach_controller" 00:13:30.802 }' 00:13:30.802 [2024-07-12 11:50:18.363005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.363029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.371024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.371047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.379041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.379062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.387063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.387083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.392257] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:13:30.802 [2024-07-12 11:50:18.392329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902939 ] 00:13:30.802 [2024-07-12 11:50:18.395084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.395105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.403107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.403128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.411125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.411167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.419162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.419182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.802 [2024-07-12 11:50:18.427186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.427221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.435216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.435240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.443243] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.443264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.451273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.451298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.455341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.802 [2024-07-12 11:50:18.459291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.459316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.467345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.467383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.475339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.475364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.483359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.483383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.491379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.491403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.499401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.499425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.507424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.507449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.515446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.515470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.523491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.523526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.531510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.531544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.539511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.539536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.547533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.547557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.555556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.555589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.563578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.563601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.571600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.571625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.578133] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.802 [2024-07-12 11:50:18.579621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.579645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.587643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.587667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.595688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.595723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.603723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.603763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.611741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.611780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.619767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.619806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.627785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.627823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.635809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.635848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.643808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.802 [2024-07-12 11:50:18.643835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.802 [2024-07-12 11:50:18.651841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.651952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.659881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.659927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.667916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
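The repeated ERROR pairs above and below come from spdk_nvmf_subsystem_add_ns_ext rejecting an add of NSID 1 while namespace 1 (malloc0) is still attached to cnode1, after which nvmf_rpc_ns_paused reports "Unable to add namespace". This excerpt does not show which zcopy.sh step issues those adds during the randrw run, but the same pair of messages can be reproduced by simply repeating the namespace add from the earlier setup:

# repeating the earlier add while NSID 1 is still attached yields the same error pair
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # -> Requested NSID 1 already in use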
00:13:30.803 [2024-07-12 11:50:18.667951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.675899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.675935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.683928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.683949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.691949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.691970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.699985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.700010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.707991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.708015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.716007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.716029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.724037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.724061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.732055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.732078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.740077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.740098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.748098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.748118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.756138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.756173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.764167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.764188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.772174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.772215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.780224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.780250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.788240] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.788266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.796280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.796306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.804299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.804327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 Running I/O for 5 seconds... 00:13:30.803 [2024-07-12 11:50:18.812313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.812338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.827130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.827173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.838309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.838336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.849589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.849615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.861002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.861035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.872641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.872674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.885255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.885283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.895041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.895069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.906488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.906514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.917582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.917609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.928790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.928820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.941673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 
[2024-07-12 11:50:18.941700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.951500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.951526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.963330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.963357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.974311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.974337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.985959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.985986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:18.996730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:18.996756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.007575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.007602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.018319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.018347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.031094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.031123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.043121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.043148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.052028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.052055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.063696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.063723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.074357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.074388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.085561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.085595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.097227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.097258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.108654] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.108681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.121803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.121831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.132415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.132442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.143035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.143062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.153781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.153807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.164556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.164587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.803 [2024-07-12 11:50:19.175474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.803 [2024-07-12 11:50:19.175500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.186784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.186810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.197658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.197684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.208525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.208551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.219926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.219953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.231424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.231451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.242579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.242605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.253331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.253357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.264128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.264170] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.275160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.275190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.286697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.286727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.298358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.298397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.309585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.309615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.321303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.321333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.332468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.332498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.344361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.344391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.355840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.355881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.368689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.368719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.378873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.378904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.390596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.390627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.402276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.402307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.413492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.413523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.425075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.425104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.436535] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.436566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.449640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.449669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.460119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.460149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.471133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.471163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.484256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.484286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.494305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.494334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.505671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.505702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.517102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.517141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.528701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.528731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.540147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.540177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.551354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.551384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.562680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.562711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.574024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.574054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.585234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.585264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.596399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.596428] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.610315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.610345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.621530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.621560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.633112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.633141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.644778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.644807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.656353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.656383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.669791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.669821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.680430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.680461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.691929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.691970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.703460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.703490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.716752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.716782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.728049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.728079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.739521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.739552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.751164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.751195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.762548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.762578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.773882] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.804 [2024-07-12 11:50:19.773912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.804 [2024-07-12 11:50:19.785302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.785333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.796788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.796818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.808244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.808273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.820165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.820195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.831695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.831724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.842667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.842696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.854504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.854534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.866436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.866466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.877901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.877931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.889055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.889084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.900663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.900693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.912436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.912466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.924110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.924140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.935920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.935949] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.947695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.947724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.958724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.958754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.970118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.970148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.981478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.981507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:19.992956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:19.992986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.004447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.004477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.016032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.016062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.027265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.027294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.040466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.040498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.051277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.051308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.062773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.062804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.079519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.079559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.089660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.089688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.101596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.101622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.112712] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.112740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.124084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.124112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.135588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.135615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.146657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.146684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.157979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.158007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.169262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.169289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.180014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.180041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.192720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.192747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.203389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.203415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.213708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.213735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.224642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.224669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.237188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.237228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.247392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.247430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.257926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.257954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.268762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.268790] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.281405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.281431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.805 [2024-07-12 11:50:20.291111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.805 [2024-07-12 11:50:20.291146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.302711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.302739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.313681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.313707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.324411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.324437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.335081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.335108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.345525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.345552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.355909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.355936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.366414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.366440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.377090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.377117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.387859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.387894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.398556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.398583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.409046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.409073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.419665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.419691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.432087] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.432114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.442350] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.442378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.453116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.453142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.465990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.466017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.477792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.477819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.487425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.487451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.498594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.498621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.510939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.510967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.521065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.521092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.531798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.531840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.543031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.543059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.064 [2024-07-12 11:50:20.553604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.064 [2024-07-12 11:50:20.553631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.564295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.564322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.575278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.575319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.588107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.588148] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.599895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.599933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.609032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.609060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.620923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.620950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.633519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.633546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.643541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.643568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.654084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.654111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.664853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.664890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.675719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.675747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.687974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.688001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.697975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.698004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.709856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.709892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.721539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.721566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.735114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.735154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.745417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.745444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.756602] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.756642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.769724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.769750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.780140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.780169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.791330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.791370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.324 [2024-07-12 11:50:20.802699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.324 [2024-07-12 11:50:20.802733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.325 [2024-07-12 11:50:20.813844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.325 [2024-07-12 11:50:20.813881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.825040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.825068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.838165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.838191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.848496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.848522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.860192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.860222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.871489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.871531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.882994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.883022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.893969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.893996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.905557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.905583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.916546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.916572] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.929602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.929629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.940360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.940387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.951492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.951519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.964374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.964400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.974571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.974598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.986250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.986277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:20.997024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:20.997051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.008456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.008483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.019194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.019228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.030584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.030610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.041907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.041934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.053122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.053150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.063920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.063946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.584 [2024-07-12 11:50:21.075268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.584 [2024-07-12 11:50:21.075295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.843 [2024-07-12 11:50:21.086479] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.843 [2024-07-12 11:50:21.086507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the preceding pair of messages - "Requested NSID 1 already in use" from subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1546:nvmf_rpc_ns_paused - repeats at roughly 10 ms intervals from [2024-07-12 11:50:21.099216] through [2024-07-12 11:50:23.833994], log timestamps 00:13:31.843 through 00:13:34.444]
00:13:34.444
00:13:34.444 Latency(us)
00:13:34.444 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:34.444 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:34.444 	 Nvme1n1                                             :       5.01   11528.31      90.06      0.00     0.00   11088.20    4490.43   23884.23
00:13:34.444 ===================================================================================================================
00:13:34.444 	 Total                                               :              11528.31      90.06      0.00     0.00   11088.20    4490.43   23884.23
00:13:34.444 [2024-07-12 11:50:23.842015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.444 [2024-07-12 11:50:23.842040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair of messages repeats again at roughly 8 ms intervals from [2024-07-12 11:50:23.850003] through [2024-07-12 11:50:24.058605], log timestamps 00:13:34.444 through 00:13:34.705]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.066593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.066624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.074675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.074719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.082693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.082737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.090677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.090701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.098697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.098721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 [2024-07-12 11:50:24.106719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.705 [2024-07-12 11:50:24.106743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (902939) - No such process 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 902939 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.705 delay0 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.705 11:50:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:34.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.964 [2024-07-12 11:50:24.227842] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:41.534 Initializing NVMe Controllers 00:13:41.534 
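The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs above are the expected failures from re-issuing nvmf_subsystem_add_ns for an NSID that is still attached to nqn.2016-06.io.spdk:cnode1. A minimal way to trigger the same JSON-RPC error by hand, sketched under the assumption that a running nvmf_tgt already exposes NSID 1 on that subsystem and that the backing bdev is named malloc0 (both names are illustrative assumptions, not taken from the script):

    # hypothetical manual reproduction; NSID 1 is already taken, so the call fails
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target log then shows: "Requested NSID 1 already in use" followed by "Unable to add namespace"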
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:41.534 Initialization complete. Launching workers. 00:13:41.534 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 49 00:13:41.534 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 336, failed to submit 33 00:13:41.534 success 119, unsuccess 217, failed 0 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.534 rmmod nvme_tcp 00:13:41.534 rmmod nvme_fabrics 00:13:41.534 rmmod nvme_keyring 00:13:41.534 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 901603 ']' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 901603 ']' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 901603' 00:13:41.535 killing process with pid 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 901603 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.535 11:50:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.482 
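The abort statistics above come from the last phase of zcopy.sh: the original namespace is removed, a delay bdev (delay0) with large artificial latency is stacked on malloc0 and re-exported as NSID 1, and the abort example then submits queued I/O that it can try to cancel. A condensed sketch of that sequence, assembled from the commands logged above (it assumes nvmf_tgt is still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the delay arguments are in microseconds, so roughly 1 s each):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive aborts at the slow namespace for 5 seconds, queue depth 64, 50/50 random read/write
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'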
11:50:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.482 00:13:43.483 real 0m28.046s 00:13:43.483 user 0m41.404s 00:13:43.483 sys 0m8.347s 00:13:43.483 11:50:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:43.483 11:50:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:43.483 ************************************ 00:13:43.483 END TEST nvmf_zcopy 00:13:43.483 ************************************ 00:13:43.483 11:50:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:43.483 11:50:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:43.483 11:50:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:43.483 11:50:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.483 ************************************ 00:13:43.483 START TEST nvmf_nmic 00:13:43.483 ************************************ 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:43.483 * Looking for test storage... 00:13:43.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.483 11:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:46.011 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:46.011 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:46.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.011 11:50:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:46.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.011 11:50:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:46.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:46.011 00:13:46.011 --- 10.0.0.2 ping statistics --- 00:13:46.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.011 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:46.011 00:13:46.011 --- 10.0.0.1 ping statistics --- 00:13:46.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.011 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.011 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=906319 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 906319 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 906319 ']' 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.012 [2024-07-12 11:50:35.129242] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
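The interface plumbing above gives the test a self-contained loopback topology: port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), and nvmf_tgt is then launched inside the namespace. Condensed from the commands logged above, assuming the two E810 ports are cabled back to back and the commands run from the SPDK repository root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # run the target inside the namespace so it listens on 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF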
00:13:46.012 [2024-07-12 11:50:35.129325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.012 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.012 [2024-07-12 11:50:35.201614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.012 [2024-07-12 11:50:35.317042] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.012 [2024-07-12 11:50:35.317096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.012 [2024-07-12 11:50:35.317126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.012 [2024-07-12 11:50:35.317138] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.012 [2024-07-12 11:50:35.317148] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.012 [2024-07-12 11:50:35.317276] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.012 [2024-07-12 11:50:35.318900] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.012 [2024-07-12 11:50:35.318929] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.012 [2024-07-12 11:50:35.318933] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.012 [2024-07-12 11:50:35.478698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.012 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.271 Malloc0 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.271 [2024-07-12 11:50:35.530273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:46.271 test case1: single bdev can't be used in multiple subsystems 00:13:46.271 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 [2024-07-12 11:50:35.554166] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:46.272 [2024-07-12 11:50:35.554196] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:46.272 [2024-07-12 11:50:35.554211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.272 request: 00:13:46.272 { 00:13:46.272 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:46.272 "namespace": { 00:13:46.272 "bdev_name": "Malloc0", 00:13:46.272 "no_auto_visible": false 00:13:46.272 }, 00:13:46.272 "method": "nvmf_subsystem_add_ns", 00:13:46.272 "req_id": 1 00:13:46.272 } 00:13:46.272 Got JSON-RPC error response 00:13:46.272 response: 00:13:46.272 { 00:13:46.272 "code": -32602, 00:13:46.272 "message": "Invalid parameters" 00:13:46.272 } 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:13:46.272 Adding namespace failed - expected result. 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:46.272 test case2: host connect to nvmf target in multiple paths 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 [2024-07-12 11:50:35.562266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.272 11:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.842 11:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:47.779 11:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.779 11:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:13:47.779 11:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.779 11:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:47.779 11:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:13:49.682 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:49.682 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:49.683 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.683 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:49.683 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.683 11:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:13:49.683 11:50:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:49.683 [global] 00:13:49.683 thread=1 00:13:49.683 invalidate=1 00:13:49.683 rw=write 00:13:49.683 time_based=1 00:13:49.683 runtime=1 00:13:49.683 ioengine=libaio 00:13:49.683 direct=1 00:13:49.683 bs=4096 00:13:49.683 iodepth=1 00:13:49.683 norandommap=0 00:13:49.683 numjobs=1 00:13:49.683 00:13:49.683 verify_dump=1 00:13:49.683 verify_backlog=512 00:13:49.683 verify_state_save=0 00:13:49.683 do_verify=1 00:13:49.683 verify=crc32c-intel 00:13:49.683 [job0] 00:13:49.683 filename=/dev/nvme0n1 00:13:49.683 Could not set queue depth (nvme0n1) 00:13:49.683 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.683 fio-3.35 00:13:49.683 Starting 1 thread 00:13:51.058 00:13:51.058 job0: (groupid=0, jobs=1): err= 0: pid=906841: Fri Jul 12 11:50:40 2024 00:13:51.058 read: IOPS=2170, BW=8683KiB/s 
(8892kB/s)(8692KiB/1001msec) 00:13:51.058 slat (nsec): min=5482, max=47587, avg=10813.61, stdev=4891.66 00:13:51.058 clat (usec): min=180, max=536, avg=222.88, stdev=21.63 00:13:51.058 lat (usec): min=188, max=554, avg=233.69, stdev=24.55 00:13:51.058 clat percentiles (usec): 00:13:51.058 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:13:51.058 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:13:51.058 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:13:51.058 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 375], 99.95th=[ 383], 00:13:51.058 | 99.99th=[ 537] 00:13:51.058 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:51.058 slat (usec): min=7, max=32985, avg=25.67, stdev=651.71 00:13:51.058 clat (usec): min=131, max=328, avg=160.34, stdev=18.17 00:13:51.058 lat (usec): min=139, max=33262, avg=186.01, stdev=654.37 00:13:51.058 clat percentiles (usec): 00:13:51.058 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:13:51.058 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:13:51.058 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 192], 00:13:51.058 | 99.00th=[ 208], 99.50th=[ 229], 99.90th=[ 277], 99.95th=[ 297], 00:13:51.058 | 99.99th=[ 330] 00:13:51.058 bw ( KiB/s): min= 8744, max= 8744, per=85.48%, avg=8744.00, stdev= 0.00, samples=1 00:13:51.058 iops : min= 2186, max= 2186, avg=2186.00, stdev= 0.00, samples=1 00:13:51.058 lat (usec) : 250=95.90%, 500=4.08%, 750=0.02% 00:13:51.058 cpu : usr=4.90%, sys=6.90%, ctx=4735, majf=0, minf=2 00:13:51.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.058 issued rwts: total=2173,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:51.058 00:13:51.058 Run status group 0 (all jobs): 00:13:51.058 READ: bw=8683KiB/s (8892kB/s), 8683KiB/s-8683KiB/s (8892kB/s-8892kB/s), io=8692KiB (8901kB), run=1001-1001msec 00:13:51.058 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:51.058 00:13:51.058 Disk stats (read/write): 00:13:51.058 nvme0n1: ios=2073/2128, merge=0/0, ticks=1394/334, in_queue=1728, util=98.90% 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:51.058 11:50:40 
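The fio results above close out the "host connect to nvmf target in multiple paths" case: the host connects to cnode1 through both listeners (4420 and 4421), runs a short verified write job against the resulting namespace block device, and then disconnects. A rough standalone equivalent of what the fio-wrapper generated, based on the job file printed above (the device name and the hostnqn/hostid placeholders are illustrative; the real values depend on the local setup):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=<hostnqn> --hostid=<hostid>
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1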
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.058 rmmod nvme_tcp 00:13:51.058 rmmod nvme_fabrics 00:13:51.058 rmmod nvme_keyring 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 906319 ']' 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 906319 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 906319 ']' 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 906319 00:13:51.058 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 906319 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 906319' 00:13:51.059 killing process with pid 906319 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 906319 00:13:51.059 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 906319 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.315 11:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.856 11:50:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.856 00:13:53.856 real 0m10.089s 00:13:53.856 user 0m22.542s 00:13:53.856 sys 0m2.518s 00:13:53.856 11:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:53.856 11:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:53.856 ************************************ 00:13:53.856 END TEST nvmf_nmic 00:13:53.856 ************************************ 00:13:53.856 11:50:42 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:53.856 11:50:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 
00:13:53.856 11:50:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:53.856 11:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.856 ************************************ 00:13:53.856 START TEST nvmf_fio_target 00:13:53.856 ************************************ 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:53.856 * Looking for test storage... 00:13:53.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.856 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.857 11:50:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:55.762 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.763 11:50:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.763 11:50:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:55.763 00:13:55.763 --- 10.0.0.2 ping statistics --- 00:13:55.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.763 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:55.763 00:13:55.763 --- 10.0.0.1 ping statistics --- 00:13:55.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.763 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=908909 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 908909 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 908909 ']' 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:55.763 11:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.763 [2024-07-12 11:50:45.037402] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:13:55.764 [2024-07-12 11:50:45.037480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.764 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.764 [2024-07-12 11:50:45.108119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.764 [2024-07-12 11:50:45.229263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.764 [2024-07-12 11:50:45.229328] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.764 [2024-07-12 11:50:45.229345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.764 [2024-07-12 11:50:45.229359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.764 [2024-07-12 11:50:45.229371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.764 [2024-07-12 11:50:45.229456] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.764 [2024-07-12 11:50:45.229516] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.764 [2024-07-12 11:50:45.230018] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.764 [2024-07-12 11:50:45.230026] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.700 11:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.958 [2024-07-12 11:50:46.218495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.958 11:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.215 11:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:57.215 11:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.473 11:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:57.473 11:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.732 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:57.732 11:50:47 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.988 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:57.988 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:58.246 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.502 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:58.502 11:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.760 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:58.760 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.020 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:59.020 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:59.278 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.536 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:59.536 11:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.794 11:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:59.794 11:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:00.054 11:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.054 [2024-07-12 11:50:49.532928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.313 11:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:00.313 11:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:00.573 11:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:14:01.511 11:50:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:14:03.431 11:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:03.431 [global] 00:14:03.431 thread=1 00:14:03.431 invalidate=1 00:14:03.431 rw=write 00:14:03.431 time_based=1 00:14:03.431 runtime=1 00:14:03.431 ioengine=libaio 00:14:03.431 direct=1 00:14:03.431 bs=4096 00:14:03.431 iodepth=1 00:14:03.431 norandommap=0 00:14:03.431 numjobs=1 00:14:03.431 00:14:03.431 verify_dump=1 00:14:03.431 verify_backlog=512 00:14:03.431 verify_state_save=0 00:14:03.431 do_verify=1 00:14:03.431 verify=crc32c-intel 00:14:03.431 [job0] 00:14:03.431 filename=/dev/nvme0n1 00:14:03.431 [job1] 00:14:03.431 filename=/dev/nvme0n2 00:14:03.431 [job2] 00:14:03.431 filename=/dev/nvme0n3 00:14:03.431 [job3] 00:14:03.431 filename=/dev/nvme0n4 00:14:03.431 Could not set queue depth (nvme0n1) 00:14:03.431 Could not set queue depth (nvme0n2) 00:14:03.432 Could not set queue depth (nvme0n3) 00:14:03.432 Could not set queue depth (nvme0n4) 00:14:03.699 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.699 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.699 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.699 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.699 fio-3.35 00:14:03.699 Starting 4 threads 00:14:05.082 00:14:05.082 job0: (groupid=0, jobs=1): err= 0: pid=909993: Fri Jul 12 11:50:54 2024 00:14:05.083 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:05.083 slat (nsec): min=4952, max=52151, avg=12334.02, stdev=7382.70 00:14:05.083 clat (usec): min=185, max=41016, avg=708.82, stdev=4399.98 00:14:05.083 lat (usec): min=192, max=41047, avg=721.15, stdev=4401.08 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:14:05.083 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:14:05.083 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 269], 95.00th=[ 281], 00:14:05.083 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:05.083 | 99.99th=[41157] 00:14:05.083 write: IOPS=1436, BW=5746KiB/s (5884kB/s)(5752KiB/1001msec); 0 zone resets 00:14:05.083 slat (nsec): min=5842, max=39350, avg=11386.36, stdev=5474.66 00:14:05.083 clat (usec): min=126, max=342, avg=164.82, 
stdev=36.03 00:14:05.083 lat (usec): min=132, max=353, avg=176.21, stdev=35.14 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:14:05.083 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 159], 00:14:05.083 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 231], 95.00th=[ 249], 00:14:05.083 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 326], 99.95th=[ 343], 00:14:05.083 | 99.99th=[ 343] 00:14:05.083 bw ( KiB/s): min= 4096, max= 4096, per=26.18%, avg=4096.00, stdev= 0.00, samples=1 00:14:05.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:05.083 lat (usec) : 250=91.15%, 500=8.33% 00:14:05.083 lat (msec) : 20=0.04%, 50=0.49% 00:14:05.083 cpu : usr=1.60%, sys=3.10%, ctx=2463, majf=0, minf=1 00:14:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 issued rwts: total=1024,1438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.083 job1: (groupid=0, jobs=1): err= 0: pid=909994: Fri Jul 12 11:50:54 2024 00:14:05.083 read: IOPS=23, BW=94.2KiB/s (96.5kB/s)(96.0KiB/1019msec) 00:14:05.083 slat (nsec): min=6834, max=33739, avg=22652.83, stdev=10280.84 00:14:05.083 clat (usec): min=237, max=41972, avg=37856.05, stdev=11586.07 00:14:05.083 lat (usec): min=252, max=41995, avg=37878.70, stdev=11585.79 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 239], 5.00th=[ 302], 10.00th=[40633], 20.00th=[41157], 00:14:05.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:05.083 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:05.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:05.083 | 99.99th=[42206] 00:14:05.083 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:14:05.083 slat (nsec): min=6576, max=31833, avg=10879.73, stdev=4882.27 00:14:05.083 clat (usec): min=154, max=378, avg=199.23, stdev=27.43 00:14:05.083 lat (usec): min=162, max=385, avg=210.11, stdev=26.80 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:14:05.083 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:14:05.083 | 70.00th=[ 202], 80.00th=[ 221], 90.00th=[ 245], 95.00th=[ 253], 00:14:05.083 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 379], 99.95th=[ 379], 00:14:05.083 | 99.99th=[ 379] 00:14:05.083 bw ( KiB/s): min= 4096, max= 4096, per=26.18%, avg=4096.00, stdev= 0.00, samples=1 00:14:05.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:05.083 lat (usec) : 250=90.49%, 500=5.41% 00:14:05.083 lat (msec) : 50=4.10% 00:14:05.083 cpu : usr=0.88%, sys=0.10%, ctx=538, majf=0, minf=1 00:14:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.083 job2: (groupid=0, jobs=1): err= 0: pid=909995: Fri Jul 12 11:50:54 2024 00:14:05.083 read: IOPS=875, BW=3503KiB/s (3587kB/s)(3580KiB/1022msec) 00:14:05.083 slat 
(nsec): min=4228, max=59846, avg=12617.28, stdev=7718.29 00:14:05.083 clat (usec): min=174, max=42084, avg=904.50, stdev=5247.69 00:14:05.083 lat (usec): min=179, max=42099, avg=917.12, stdev=5249.13 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:14:05.083 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:14:05.083 | 70.00th=[ 221], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 334], 00:14:05.083 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:05.083 | 99.99th=[42206] 00:14:05.083 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:14:05.083 slat (nsec): min=5369, max=40746, avg=9186.92, stdev=4715.41 00:14:05.083 clat (usec): min=129, max=1399, avg=180.90, stdev=68.56 00:14:05.083 lat (usec): min=134, max=1408, avg=190.09, stdev=68.95 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:14:05.083 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 172], 00:14:05.083 | 70.00th=[ 186], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 253], 00:14:05.083 | 99.00th=[ 375], 99.50th=[ 461], 99.90th=[ 898], 99.95th=[ 1401], 00:14:05.083 | 99.99th=[ 1401] 00:14:05.083 bw ( KiB/s): min= 8192, max= 8192, per=52.35%, avg=8192.00, stdev= 0.00, samples=1 00:14:05.083 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:05.083 lat (usec) : 250=88.22%, 500=10.79%, 1000=0.16% 00:14:05.083 lat (msec) : 2=0.05%, 50=0.78% 00:14:05.083 cpu : usr=1.37%, sys=1.86%, ctx=1919, majf=0, minf=1 00:14:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 issued rwts: total=895,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.083 job3: (groupid=0, jobs=1): err= 0: pid=909996: Fri Jul 12 11:50:54 2024 00:14:05.083 read: IOPS=519, BW=2077KiB/s (2127kB/s)(2096KiB/1009msec) 00:14:05.083 slat (nsec): min=5607, max=56048, avg=19416.11, stdev=7696.13 00:14:05.083 clat (usec): min=199, max=42191, avg=1487.24, stdev=6814.81 00:14:05.083 lat (usec): min=206, max=42244, avg=1506.65, stdev=6815.25 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 245], 00:14:05.083 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:14:05.083 | 70.00th=[ 371], 80.00th=[ 433], 90.00th=[ 545], 95.00th=[ 578], 00:14:05.083 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:05.083 | 99.99th=[42206] 00:14:05.083 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:14:05.083 slat (nsec): min=7720, max=50643, avg=14142.92, stdev=6412.92 00:14:05.083 clat (usec): min=142, max=347, avg=192.59, stdev=27.50 00:14:05.083 lat (usec): min=151, max=360, avg=206.73, stdev=28.33 00:14:05.083 clat percentiles (usec): 00:14:05.083 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 172], 00:14:05.083 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:14:05.083 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 235], 95.00th=[ 245], 00:14:05.083 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 330], 99.95th=[ 347], 00:14:05.083 | 99.99th=[ 347] 00:14:05.083 bw ( KiB/s): min= 4096, max= 4096, per=26.18%, avg=4096.00, stdev= 0.00, 
samples=2 00:14:05.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:05.083 lat (usec) : 250=74.42%, 500=20.74%, 750=3.88% 00:14:05.083 lat (msec) : 50=0.97% 00:14:05.083 cpu : usr=2.28%, sys=2.68%, ctx=1550, majf=0, minf=2 00:14:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.083 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.083 00:14:05.083 Run status group 0 (all jobs): 00:14:05.083 READ: bw=9656KiB/s (9887kB/s), 94.2KiB/s-4092KiB/s (96.5kB/s-4190kB/s), io=9868KiB (10.1MB), run=1001-1022msec 00:14:05.083 WRITE: bw=15.3MiB/s (16.0MB/s), 2010KiB/s-5746KiB/s (2058kB/s-5884kB/s), io=15.6MiB (16.4MB), run=1001-1022msec 00:14:05.083 00:14:05.083 Disk stats (read/write): 00:14:05.083 nvme0n1: ios=722/1024, merge=0/0, ticks=677/164, in_queue=841, util=86.87% 00:14:05.083 nvme0n2: ios=43/512, merge=0/0, ticks=1690/103, in_queue=1793, util=97.96% 00:14:05.083 nvme0n3: ios=890/1024, merge=0/0, ticks=596/184, in_queue=780, util=89.01% 00:14:05.083 nvme0n4: ios=578/1024, merge=0/0, ticks=894/189, in_queue=1083, util=100.00% 00:14:05.083 11:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:05.083 [global] 00:14:05.083 thread=1 00:14:05.083 invalidate=1 00:14:05.083 rw=randwrite 00:14:05.083 time_based=1 00:14:05.083 runtime=1 00:14:05.083 ioengine=libaio 00:14:05.083 direct=1 00:14:05.083 bs=4096 00:14:05.083 iodepth=1 00:14:05.083 norandommap=0 00:14:05.083 numjobs=1 00:14:05.083 00:14:05.083 verify_dump=1 00:14:05.083 verify_backlog=512 00:14:05.083 verify_state_save=0 00:14:05.083 do_verify=1 00:14:05.083 verify=crc32c-intel 00:14:05.083 [job0] 00:14:05.083 filename=/dev/nvme0n1 00:14:05.083 [job1] 00:14:05.083 filename=/dev/nvme0n2 00:14:05.083 [job2] 00:14:05.083 filename=/dev/nvme0n3 00:14:05.083 [job3] 00:14:05.083 filename=/dev/nvme0n4 00:14:05.083 Could not set queue depth (nvme0n1) 00:14:05.083 Could not set queue depth (nvme0n2) 00:14:05.083 Could not set queue depth (nvme0n3) 00:14:05.083 Could not set queue depth (nvme0n4) 00:14:05.083 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.083 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.083 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.083 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.083 fio-3.35 00:14:05.083 Starting 4 threads 00:14:06.462 00:14:06.462 job0: (groupid=0, jobs=1): err= 0: pid=910340: Fri Jul 12 11:50:55 2024 00:14:06.462 read: IOPS=23, BW=94.6KiB/s (96.9kB/s)(96.0KiB/1015msec) 00:14:06.462 slat (nsec): min=6899, max=34792, avg=20444.08, stdev=8650.76 00:14:06.462 clat (usec): min=271, max=42046, avg=37703.12, stdev=11534.73 00:14:06.462 lat (usec): min=292, max=42059, avg=37723.56, stdev=11535.13 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[40633], 20.00th=[41157], 00:14:06.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:14:06.462 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:14:06.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:06.462 | 99.99th=[42206] 00:14:06.462 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:14:06.462 slat (nsec): min=7461, max=29722, avg=8865.88, stdev=2138.44 00:14:06.462 clat (usec): min=157, max=396, avg=195.30, stdev=15.95 00:14:06.462 lat (usec): min=165, max=405, avg=204.16, stdev=16.03 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:14:06.462 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:14:06.462 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 208], 95.00th=[ 215], 00:14:06.462 | 99.00th=[ 227], 99.50th=[ 293], 99.90th=[ 396], 99.95th=[ 396], 00:14:06.462 | 99.99th=[ 396] 00:14:06.462 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.462 lat (usec) : 250=94.96%, 500=0.93% 00:14:06.462 lat (msec) : 50=4.10% 00:14:06.462 cpu : usr=0.59%, sys=0.39%, ctx=537, majf=0, minf=1 00:14:06.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.462 job1: (groupid=0, jobs=1): err= 0: pid=910345: Fri Jul 12 11:50:55 2024 00:14:06.462 read: IOPS=1051, BW=4208KiB/s (4309kB/s)(4212KiB/1001msec) 00:14:06.462 slat (nsec): min=4720, max=59961, avg=9772.27, stdev=5918.68 00:14:06.462 clat (usec): min=178, max=42025, avg=656.92, stdev=3992.57 00:14:06.462 lat (usec): min=192, max=42044, avg=666.70, stdev=3994.37 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 221], 00:14:06.462 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:14:06.462 | 70.00th=[ 269], 80.00th=[ 310], 90.00th=[ 367], 95.00th=[ 388], 00:14:06.462 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:06.462 | 99.99th=[42206] 00:14:06.462 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:06.462 slat (nsec): min=6220, max=41542, avg=11013.91, stdev=5099.40 00:14:06.462 clat (usec): min=121, max=722, avg=175.77, stdev=45.53 00:14:06.462 lat (usec): min=127, max=730, avg=186.78, stdev=46.10 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 143], 00:14:06.462 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 178], 00:14:06.462 | 70.00th=[ 194], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 245], 00:14:06.462 | 99.00th=[ 302], 99.50th=[ 383], 99.90th=[ 709], 99.95th=[ 725], 00:14:06.462 | 99.99th=[ 725] 00:14:06.462 bw ( KiB/s): min= 8192, max= 8192, per=67.67%, avg=8192.00, stdev= 0.00, samples=1 00:14:06.462 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:06.462 lat (usec) : 250=77.21%, 500=22.05%, 750=0.35% 00:14:06.462 lat (msec) : 50=0.39% 00:14:06.462 cpu : usr=1.40%, sys=2.80%, ctx=2591, majf=0, minf=1 00:14:06.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:14:06.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 issued rwts: total=1053,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.462 job2: (groupid=0, jobs=1): err= 0: pid=910346: Fri Jul 12 11:50:55 2024 00:14:06.462 read: IOPS=488, BW=1954KiB/s (2001kB/s)(1956KiB/1001msec) 00:14:06.462 slat (nsec): min=5641, max=67485, avg=20514.25, stdev=9727.24 00:14:06.462 clat (usec): min=210, max=42000, avg=1799.10, stdev=7480.18 00:14:06.462 lat (usec): min=225, max=42016, avg=1819.62, stdev=7479.91 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 229], 5.00th=[ 245], 10.00th=[ 285], 20.00th=[ 330], 00:14:06.462 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 388], 00:14:06.462 | 70.00th=[ 408], 80.00th=[ 453], 90.00th=[ 523], 95.00th=[ 611], 00:14:06.462 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:14:06.462 | 99.99th=[42206] 00:14:06.462 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:06.462 slat (nsec): min=5898, max=37786, avg=7418.49, stdev=2438.42 00:14:06.462 clat (usec): min=137, max=272, avg=199.61, stdev=22.79 00:14:06.462 lat (usec): min=144, max=279, avg=207.03, stdev=23.12 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:14:06.462 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:14:06.462 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 235], 95.00th=[ 247], 00:14:06.462 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 273], 99.95th=[ 273], 00:14:06.462 | 99.99th=[ 273] 00:14:06.462 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.462 lat (usec) : 250=52.55%, 500=42.16%, 750=3.60% 00:14:06.462 lat (msec) : 50=1.70% 00:14:06.462 cpu : usr=0.70%, sys=1.50%, ctx=1001, majf=0, minf=1 00:14:06.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 issued rwts: total=489,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.462 job3: (groupid=0, jobs=1): err= 0: pid=910347: Fri Jul 12 11:50:55 2024 00:14:06.462 read: IOPS=316, BW=1267KiB/s (1297kB/s)(1268KiB/1001msec) 00:14:06.462 slat (nsec): min=6855, max=36669, avg=13294.05, stdev=6200.13 00:14:06.462 clat (usec): min=221, max=42037, avg=2772.24, stdev=9683.84 00:14:06.462 lat (usec): min=228, max=42055, avg=2785.53, stdev=9684.61 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 229], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 273], 00:14:06.462 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 355], 60.00th=[ 367], 00:14:06.462 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 445], 95.00th=[40633], 00:14:06.462 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:06.462 | 99.99th=[42206] 00:14:06.462 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:06.462 slat (nsec): min=7639, max=32618, avg=9755.83, stdev=2485.03 00:14:06.462 clat (usec): min=160, max=393, avg=208.10, stdev=19.39 00:14:06.462 lat (usec): min=168, max=402, avg=217.86, stdev=19.51 00:14:06.462 clat percentiles (usec): 00:14:06.462 | 1.00th=[ 172], 5.00th=[ 
178], 10.00th=[ 186], 20.00th=[ 194], 00:14:06.462 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 212], 00:14:06.462 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 237], 00:14:06.462 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 396], 99.95th=[ 396], 00:14:06.462 | 99.99th=[ 396] 00:14:06.462 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.462 lat (usec) : 250=63.93%, 500=33.66%, 750=0.12% 00:14:06.462 lat (msec) : 50=2.29% 00:14:06.462 cpu : usr=0.50%, sys=1.30%, ctx=831, majf=0, minf=2 00:14:06.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.462 issued rwts: total=317,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.462 00:14:06.462 Run status group 0 (all jobs): 00:14:06.462 READ: bw=7421KiB/s (7599kB/s), 94.6KiB/s-4208KiB/s (96.9kB/s-4309kB/s), io=7532KiB (7713kB), run=1001-1015msec 00:14:06.462 WRITE: bw=11.8MiB/s (12.4MB/s), 2018KiB/s-6138KiB/s (2066kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1015msec 00:14:06.462 00:14:06.462 Disk stats (read/write): 00:14:06.462 nvme0n1: ios=68/512, merge=0/0, ticks=1258/101, in_queue=1359, util=97.29% 00:14:06.462 nvme0n2: ios=902/1024, merge=0/0, ticks=1638/180, in_queue=1818, util=97.66% 00:14:06.462 nvme0n3: ios=232/512, merge=0/0, ticks=736/101, in_queue=837, util=88.91% 00:14:06.462 nvme0n4: ios=262/512, merge=0/0, ticks=872/99, in_queue=971, util=100.00% 00:14:06.462 11:50:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:06.462 [global] 00:14:06.462 thread=1 00:14:06.462 invalidate=1 00:14:06.462 rw=write 00:14:06.462 time_based=1 00:14:06.462 runtime=1 00:14:06.462 ioengine=libaio 00:14:06.462 direct=1 00:14:06.462 bs=4096 00:14:06.462 iodepth=128 00:14:06.462 norandommap=0 00:14:06.462 numjobs=1 00:14:06.462 00:14:06.462 verify_dump=1 00:14:06.462 verify_backlog=512 00:14:06.462 verify_state_save=0 00:14:06.462 do_verify=1 00:14:06.462 verify=crc32c-intel 00:14:06.462 [job0] 00:14:06.462 filename=/dev/nvme0n1 00:14:06.462 [job1] 00:14:06.462 filename=/dev/nvme0n2 00:14:06.462 [job2] 00:14:06.462 filename=/dev/nvme0n3 00:14:06.462 [job3] 00:14:06.462 filename=/dev/nvme0n4 00:14:06.463 Could not set queue depth (nvme0n1) 00:14:06.463 Could not set queue depth (nvme0n2) 00:14:06.463 Could not set queue depth (nvme0n3) 00:14:06.463 Could not set queue depth (nvme0n4) 00:14:06.463 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.463 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.463 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.463 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.463 fio-3.35 00:14:06.463 Starting 4 threads 00:14:07.842 00:14:07.842 job0: (groupid=0, jobs=1): err= 0: pid=910572: Fri Jul 12 11:50:57 2024 00:14:07.842 read: IOPS=5294, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1002msec) 00:14:07.842 slat (usec): min=2, max=6819, avg=85.93, 
stdev=457.68 00:14:07.842 clat (usec): min=634, max=18194, avg=11636.91, stdev=1991.43 00:14:07.842 lat (usec): min=2558, max=18198, avg=11722.84, stdev=1998.31 00:14:07.842 clat percentiles (usec): 00:14:07.842 | 1.00th=[ 3458], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10945], 00:14:07.842 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:14:07.842 | 70.00th=[12125], 80.00th=[12387], 90.00th=[13435], 95.00th=[14877], 00:14:07.842 | 99.00th=[17433], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:14:07.842 | 99.99th=[18220] 00:14:07.842 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:14:07.842 slat (usec): min=4, max=10943, avg=84.05, stdev=479.54 00:14:07.842 clat (usec): min=5483, max=24704, avg=11552.94, stdev=1865.66 00:14:07.842 lat (usec): min=5503, max=24716, avg=11636.99, stdev=1903.49 00:14:07.842 clat percentiles (usec): 00:14:07.842 | 1.00th=[ 8160], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:14:07.842 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11731], 60.00th=[11994], 00:14:07.842 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[15664], 00:14:07.842 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20579], 99.95th=[21890], 00:14:07.842 | 99.99th=[24773] 00:14:07.842 bw ( KiB/s): min=20920, max=24136, per=31.77%, avg=22528.00, stdev=2274.06, samples=2 00:14:07.842 iops : min= 5230, max= 6034, avg=5632.00, stdev=568.51, samples=2 00:14:07.842 lat (usec) : 750=0.01% 00:14:07.842 lat (msec) : 4=0.73%, 10=16.41%, 20=82.79%, 50=0.05% 00:14:07.842 cpu : usr=7.19%, sys=13.49%, ctx=464, majf=0, minf=13 00:14:07.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:07.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.842 issued rwts: total=5305,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.842 job1: (groupid=0, jobs=1): err= 0: pid=910574: Fri Jul 12 11:50:57 2024 00:14:07.842 read: IOPS=3857, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1004msec) 00:14:07.842 slat (usec): min=3, max=25680, avg=136.68, stdev=1027.05 00:14:07.842 clat (usec): min=455, max=77392, avg=17922.58, stdev=12438.32 00:14:07.842 lat (usec): min=7089, max=77407, avg=18059.26, stdev=12496.56 00:14:07.842 clat percentiles (usec): 00:14:07.842 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11207], 00:14:07.842 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[13566], 00:14:07.842 | 70.00th=[18482], 80.00th=[22414], 90.00th=[33817], 95.00th=[40109], 00:14:07.842 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:14:07.842 | 99.99th=[77071] 00:14:07.842 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:14:07.842 slat (usec): min=4, max=13768, avg=105.33, stdev=644.15 00:14:07.842 clat (usec): min=1128, max=57119, avg=14131.87, stdev=6512.86 00:14:07.843 lat (usec): min=1141, max=57138, avg=14237.20, stdev=6561.38 00:14:07.843 clat percentiles (usec): 00:14:07.843 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11076], 00:14:07.843 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:14:07.843 | 70.00th=[14091], 80.00th=[16450], 90.00th=[17695], 95.00th=[19792], 00:14:07.843 | 99.00th=[47449], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:14:07.843 | 99.99th=[56886] 00:14:07.843 bw ( KiB/s): min=12288, max=20480, 
per=23.11%, avg=16384.00, stdev=5792.62, samples=2 00:14:07.843 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:14:07.843 lat (usec) : 500=0.01% 00:14:07.843 lat (msec) : 2=0.03%, 10=7.40%, 20=77.58%, 50=13.00%, 100=1.98% 00:14:07.843 cpu : usr=4.39%, sys=8.97%, ctx=390, majf=0, minf=7 00:14:07.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:07.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.843 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.843 job2: (groupid=0, jobs=1): err= 0: pid=910575: Fri Jul 12 11:50:57 2024 00:14:07.843 read: IOPS=3622, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1011msec) 00:14:07.843 slat (usec): min=2, max=12476, avg=121.10, stdev=794.55 00:14:07.843 clat (usec): min=4981, max=39768, avg=14866.78, stdev=4141.26 00:14:07.843 lat (usec): min=4992, max=39825, avg=14987.88, stdev=4203.61 00:14:07.843 clat percentiles (usec): 00:14:07.843 | 1.00th=[ 7439], 5.00th=[10552], 10.00th=[11600], 20.00th=[12911], 00:14:07.843 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:14:07.843 | 70.00th=[14877], 80.00th=[15401], 90.00th=[19530], 95.00th=[23725], 00:14:07.843 | 99.00th=[32113], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:14:07.843 | 99.99th=[39584] 00:14:07.843 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:14:07.843 slat (usec): min=3, max=13033, avg=106.53, stdev=646.12 00:14:07.843 clat (usec): min=1107, max=60359, avg=17693.06, stdev=7921.74 00:14:07.843 lat (usec): min=1113, max=60371, avg=17799.59, stdev=7972.78 00:14:07.843 clat percentiles (usec): 00:14:07.843 | 1.00th=[ 3720], 5.00th=[ 6521], 10.00th=[ 8455], 20.00th=[11076], 00:14:07.843 | 30.00th=[12649], 40.00th=[13566], 50.00th=[15008], 60.00th=[19530], 00:14:07.843 | 70.00th=[22414], 80.00th=[24249], 90.00th=[29754], 95.00th=[32637], 00:14:07.843 | 99.00th=[33424], 99.50th=[35390], 99.90th=[60556], 99.95th=[60556], 00:14:07.843 | 99.99th=[60556] 00:14:07.843 bw ( KiB/s): min=14992, max=17376, per=22.83%, avg=16184.00, stdev=1685.74, samples=2 00:14:07.843 iops : min= 3748, max= 4344, avg=4046.00, stdev=421.44, samples=2 00:14:07.843 lat (msec) : 2=0.12%, 4=0.43%, 10=7.91%, 20=66.65%, 50=24.80% 00:14:07.843 lat (msec) : 100=0.09% 00:14:07.843 cpu : usr=3.96%, sys=5.54%, ctx=355, majf=0, minf=15 00:14:07.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:07.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.843 issued rwts: total=3662,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.843 job3: (groupid=0, jobs=1): err= 0: pid=910576: Fri Jul 12 11:50:57 2024 00:14:07.843 read: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1010msec) 00:14:07.843 slat (usec): min=3, max=31239, avg=129.77, stdev=937.09 00:14:07.843 clat (usec): min=6461, max=75446, avg=17297.91, stdev=10967.82 00:14:07.843 lat (usec): min=9561, max=75456, avg=17427.68, stdev=11015.62 00:14:07.843 clat percentiles (usec): 00:14:07.843 | 1.00th=[ 9896], 5.00th=[11469], 10.00th=[12256], 20.00th=[12649], 00:14:07.843 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13698], 60.00th=[14484], 00:14:07.843 | 
70.00th=[14877], 80.00th=[16712], 90.00th=[30016], 95.00th=[36439], 00:14:07.843 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:14:07.843 | 99.99th=[74974] 00:14:07.843 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:14:07.843 slat (usec): min=3, max=17211, avg=106.49, stdev=639.68 00:14:07.843 clat (usec): min=766, max=40746, avg=14455.76, stdev=4708.90 00:14:07.843 lat (usec): min=775, max=40766, avg=14562.25, stdev=4730.59 00:14:07.843 clat percentiles (usec): 00:14:07.843 | 1.00th=[ 9503], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[12256], 00:14:07.843 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 00:14:07.843 | 70.00th=[14353], 80.00th=[15270], 90.00th=[19006], 95.00th=[26608], 00:14:07.843 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:14:07.843 | 99.99th=[40633] 00:14:07.843 bw ( KiB/s): min=16384, max=16384, per=23.11%, avg=16384.00, stdev= 0.00, samples=2 00:14:07.843 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:07.843 lat (usec) : 1000=0.02% 00:14:07.843 lat (msec) : 10=3.36%, 20=84.83%, 50=10.61%, 100=1.18% 00:14:07.843 cpu : usr=7.04%, sys=9.22%, ctx=330, majf=0, minf=15 00:14:07.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:07.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.843 issued rwts: total=3979,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.843 00:14:07.843 Run status group 0 (all jobs): 00:14:07.843 READ: bw=65.0MiB/s (68.1MB/s), 14.1MiB/s-20.7MiB/s (14.8MB/s-21.7MB/s), io=65.7MiB (68.9MB), run=1002-1011msec 00:14:07.843 WRITE: bw=69.2MiB/s (72.6MB/s), 15.8MiB/s-22.0MiB/s (16.6MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1002-1011msec 00:14:07.843 00:14:07.843 Disk stats (read/write): 00:14:07.843 nvme0n1: ios=4580/4608, merge=0/0, ticks=21532/20611, in_queue=42143, util=96.89% 00:14:07.843 nvme0n2: ios=3584/3584, merge=0/0, ticks=23938/21044, in_queue=44982, util=85.86% 00:14:07.843 nvme0n3: ios=3103/3578, merge=0/0, ticks=39069/53818, in_queue=92887, util=100.00% 00:14:07.843 nvme0n4: ios=3129/3422, merge=0/0, ticks=16409/15669, in_queue=32078, util=97.35% 00:14:07.843 11:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:07.843 [global] 00:14:07.843 thread=1 00:14:07.843 invalidate=1 00:14:07.843 rw=randwrite 00:14:07.843 time_based=1 00:14:07.843 runtime=1 00:14:07.843 ioengine=libaio 00:14:07.843 direct=1 00:14:07.843 bs=4096 00:14:07.843 iodepth=128 00:14:07.843 norandommap=0 00:14:07.843 numjobs=1 00:14:07.843 00:14:07.843 verify_dump=1 00:14:07.843 verify_backlog=512 00:14:07.843 verify_state_save=0 00:14:07.843 do_verify=1 00:14:07.843 verify=crc32c-intel 00:14:07.843 [job0] 00:14:07.843 filename=/dev/nvme0n1 00:14:07.843 [job1] 00:14:07.843 filename=/dev/nvme0n2 00:14:07.843 [job2] 00:14:07.843 filename=/dev/nvme0n3 00:14:07.843 [job3] 00:14:07.843 filename=/dev/nvme0n4 00:14:07.843 Could not set queue depth (nvme0n1) 00:14:07.843 Could not set queue depth (nvme0n2) 00:14:07.843 Could not set queue depth (nvme0n3) 00:14:07.843 Could not set queue depth (nvme0n4) 00:14:08.101 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:14:08.101 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.101 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.101 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.101 fio-3.35 00:14:08.101 Starting 4 threads 00:14:09.481 00:14:09.481 job0: (groupid=0, jobs=1): err= 0: pid=910811: Fri Jul 12 11:50:58 2024 00:14:09.481 read: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(12.1MiB/1048msec) 00:14:09.481 slat (usec): min=3, max=47660, avg=176.45, stdev=1143.46 00:14:09.481 clat (usec): min=593, max=59638, avg=21325.54, stdev=11309.82 00:14:09.481 lat (usec): min=5755, max=59651, avg=21501.99, stdev=11376.09 00:14:09.481 clat percentiles (usec): 00:14:09.481 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11863], 20.00th=[12387], 00:14:09.481 | 30.00th=[12649], 40.00th=[13173], 50.00th=[20055], 60.00th=[23200], 00:14:09.481 | 70.00th=[23462], 80.00th=[28443], 90.00th=[38011], 95.00th=[46924], 00:14:09.481 | 99.00th=[53216], 99.50th=[53216], 99.90th=[59507], 99.95th=[59507], 00:14:09.481 | 99.99th=[59507] 00:14:09.481 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:14:09.481 slat (usec): min=3, max=8824, avg=130.45, stdev=699.65 00:14:09.481 clat (usec): min=1279, max=91846, avg=18551.85, stdev=11631.11 00:14:09.481 lat (usec): min=1287, max=91855, avg=18682.30, stdev=11673.02 00:14:09.481 clat percentiles (usec): 00:14:09.481 | 1.00th=[ 5276], 5.00th=[10552], 10.00th=[11731], 20.00th=[11994], 00:14:09.481 | 30.00th=[12256], 40.00th=[12518], 50.00th=[16581], 60.00th=[16909], 00:14:09.481 | 70.00th=[19792], 80.00th=[23200], 90.00th=[29230], 95.00th=[34866], 00:14:09.481 | 99.00th=[78119], 99.50th=[84411], 99.90th=[91751], 99.95th=[91751], 00:14:09.481 | 99.99th=[91751] 00:14:09.481 bw ( KiB/s): min=11320, max=16584, per=20.40%, avg=13952.00, stdev=3722.21, samples=2 00:14:09.481 iops : min= 2830, max= 4146, avg=3488.00, stdev=930.55, samples=2 00:14:09.481 lat (usec) : 750=0.01% 00:14:09.481 lat (msec) : 2=0.13%, 10=3.57%, 20=56.88%, 50=36.77%, 100=2.63% 00:14:09.481 cpu : usr=4.11%, sys=5.25%, ctx=316, majf=0, minf=1 00:14:09.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:09.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.482 issued rwts: total=3104,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.482 job1: (groupid=0, jobs=1): err= 0: pid=910812: Fri Jul 12 11:50:58 2024 00:14:09.482 read: IOPS=5262, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1004msec) 00:14:09.482 slat (usec): min=2, max=19412, avg=90.07, stdev=525.51 00:14:09.482 clat (usec): min=3423, max=35534, avg=11964.08, stdev=3378.56 00:14:09.482 lat (usec): min=3429, max=35614, avg=12054.15, stdev=3403.50 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[ 7046], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10814], 00:14:09.482 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:14:09.482 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13698], 00:14:09.482 | 99.00th=[31589], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:14:09.482 | 99.99th=[35390] 00:14:09.482 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 
00:14:09.482 slat (usec): min=4, max=3744, avg=82.86, stdev=403.98 00:14:09.482 clat (usec): min=7715, max=16834, avg=11325.99, stdev=1124.02 00:14:09.482 lat (usec): min=7755, max=16840, avg=11408.84, stdev=1130.85 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[10814], 00:14:09.482 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:14:09.482 | 70.00th=[11731], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:14:09.482 | 99.00th=[14877], 99.50th=[15664], 99.90th=[16450], 99.95th=[16581], 00:14:09.482 | 99.99th=[16909] 00:14:09.482 bw ( KiB/s): min=20480, max=24576, per=32.94%, avg=22528.00, stdev=2896.31, samples=2 00:14:09.482 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:14:09.482 lat (msec) : 4=0.37%, 10=7.11%, 20=91.36%, 50=1.16% 00:14:09.482 cpu : usr=6.18%, sys=12.16%, ctx=462, majf=0, minf=1 00:14:09.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:09.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.482 issued rwts: total=5284,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.482 job2: (groupid=0, jobs=1): err= 0: pid=910813: Fri Jul 12 11:50:58 2024 00:14:09.482 read: IOPS=3295, BW=12.9MiB/s (13.5MB/s)(13.5MiB/1047msec) 00:14:09.482 slat (usec): min=3, max=7262, avg=129.85, stdev=687.54 00:14:09.482 clat (usec): min=9177, max=59778, avg=17278.11, stdev=7622.90 00:14:09.482 lat (usec): min=9184, max=62254, avg=17407.96, stdev=7647.78 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[10028], 5.00th=[11994], 10.00th=[13566], 20.00th=[13960], 00:14:09.482 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15270], 60.00th=[16188], 00:14:09.482 | 70.00th=[17171], 80.00th=[18744], 90.00th=[20317], 95.00th=[23200], 00:14:09.482 | 99.00th=[55837], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:14:09.482 | 99.99th=[60031] 00:14:09.482 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets 00:14:09.482 slat (usec): min=4, max=7997, avg=145.06, stdev=531.03 00:14:09.482 clat (usec): min=8488, max=33661, avg=20269.07, stdev=6129.07 00:14:09.482 lat (usec): min=8497, max=33687, avg=20414.13, stdev=6169.27 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[11076], 5.00th=[13304], 10.00th=[13566], 20.00th=[13829], 00:14:09.482 | 30.00th=[14222], 40.00th=[15664], 50.00th=[22152], 60.00th=[23200], 00:14:09.482 | 70.00th=[23462], 80.00th=[24249], 90.00th=[28443], 95.00th=[32637], 00:14:09.482 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:14:09.482 | 99.99th=[33817] 00:14:09.482 bw ( KiB/s): min=12288, max=16384, per=20.96%, avg=14336.00, stdev=2896.31, samples=2 00:14:09.482 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:14:09.482 lat (msec) : 10=0.82%, 20=65.45%, 50=32.43%, 100=1.29% 00:14:09.482 cpu : usr=3.92%, sys=7.84%, ctx=505, majf=0, minf=1 00:14:09.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:09.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.482 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.482 job3: 
(groupid=0, jobs=1): err= 0: pid=910814: Fri Jul 12 11:50:58 2024 00:14:09.482 read: IOPS=4675, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1009msec) 00:14:09.482 slat (usec): min=2, max=12896, avg=107.23, stdev=733.71 00:14:09.482 clat (usec): min=4682, max=27153, avg=13758.81, stdev=3407.58 00:14:09.482 lat (usec): min=4690, max=27167, avg=13866.03, stdev=3452.40 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[ 6325], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11600], 00:14:09.482 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12649], 60.00th=[13960], 00:14:09.482 | 70.00th=[14222], 80.00th=[15270], 90.00th=[18744], 95.00th=[21365], 00:14:09.482 | 99.00th=[24249], 99.50th=[25560], 99.90th=[26608], 99.95th=[26608], 00:14:09.482 | 99.99th=[27132] 00:14:09.482 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:14:09.482 slat (usec): min=4, max=11303, avg=86.95, stdev=508.79 00:14:09.482 clat (usec): min=2736, max=26663, avg=12329.16, stdev=2503.39 00:14:09.482 lat (usec): min=2744, max=26672, avg=12416.11, stdev=2553.33 00:14:09.482 clat percentiles (usec): 00:14:09.482 | 1.00th=[ 4948], 5.00th=[ 6521], 10.00th=[ 8586], 20.00th=[11469], 00:14:09.482 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:14:09.482 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14484], 95.00th=[15008], 00:14:09.482 | 99.00th=[16581], 99.50th=[21103], 99.90th=[24773], 99.95th=[26346], 00:14:09.482 | 99.99th=[26608] 00:14:09.482 bw ( KiB/s): min=20360, max=20464, per=29.84%, avg=20412.00, stdev=73.54, samples=2 00:14:09.482 iops : min= 5090, max= 5116, avg=5103.00, stdev=18.38, samples=2 00:14:09.482 lat (msec) : 4=0.28%, 10=9.49%, 20=86.35%, 50=3.87% 00:14:09.482 cpu : usr=8.04%, sys=7.84%, ctx=556, majf=0, minf=1 00:14:09.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:09.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.482 issued rwts: total=4718,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.482 00:14:09.482 Run status group 0 (all jobs): 00:14:09.482 READ: bw=61.7MiB/s (64.7MB/s), 11.6MiB/s-20.6MiB/s (12.1MB/s-21.6MB/s), io=64.7MiB (67.8MB), run=1004-1048msec 00:14:09.482 WRITE: bw=66.8MiB/s (70.0MB/s), 13.4MiB/s-21.9MiB/s (14.0MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1004-1048msec 00:14:09.482 00:14:09.482 Disk stats (read/write): 00:14:09.482 nvme0n1: ios=2962/3072, merge=0/0, ticks=16206/14107, in_queue=30313, util=87.07% 00:14:09.482 nvme0n2: ios=4532/4608, merge=0/0, ticks=18377/15554, in_queue=33931, util=96.54% 00:14:09.482 nvme0n3: ios=2970/3072, merge=0/0, ticks=23709/29196, in_queue=52905, util=100.00% 00:14:09.482 nvme0n4: ios=4119/4096, merge=0/0, ticks=54974/49537, in_queue=104511, util=96.52% 00:14:09.482 11:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:09.482 11:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=910946 00:14:09.482 11:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:09.482 11:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:09.482 [global] 00:14:09.482 thread=1 00:14:09.482 invalidate=1 00:14:09.482 rw=read 00:14:09.482 time_based=1 00:14:09.482 runtime=10 00:14:09.482 ioengine=libaio 00:14:09.482 direct=1 00:14:09.482 bs=4096 
00:14:09.482 iodepth=1 00:14:09.482 norandommap=1 00:14:09.482 numjobs=1 00:14:09.482 00:14:09.482 [job0] 00:14:09.482 filename=/dev/nvme0n1 00:14:09.482 [job1] 00:14:09.482 filename=/dev/nvme0n2 00:14:09.482 [job2] 00:14:09.482 filename=/dev/nvme0n3 00:14:09.482 [job3] 00:14:09.482 filename=/dev/nvme0n4 00:14:09.482 Could not set queue depth (nvme0n1) 00:14:09.482 Could not set queue depth (nvme0n2) 00:14:09.482 Could not set queue depth (nvme0n3) 00:14:09.482 Could not set queue depth (nvme0n4) 00:14:09.482 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.482 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.482 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.482 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.482 fio-3.35 00:14:09.482 Starting 4 threads 00:14:12.775 11:51:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:12.775 11:51:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:12.775 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=22265856, buflen=4096 00:14:12.775 fio: pid=911123, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:12.775 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:12.775 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:12.775 fio: io_u error on file /dev/nvme0n3: Input/output error: read offset=8695808, buflen=4096 00:14:12.775 fio: pid=911111, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:13.034 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.034 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:13.034 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13889536, buflen=4096 00:14:13.034 fio: pid=911058, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:13.293 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=35799040, buflen=4096 00:14:13.293 fio: pid=911077, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:13.293 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.293 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:13.293 00:14:13.293 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=911058: Fri Jul 12 11:51:02 2024 00:14:13.293 read: IOPS=989, BW=3957KiB/s (4052kB/s)(13.2MiB/3428msec) 00:14:13.293 slat (usec): min=5, max=13978, avg=18.89, stdev=354.59 00:14:13.293 clat (usec): min=179, max=42080, avg=982.88, stdev=5411.18 00:14:13.293 lat (usec): min=185, max=42089, avg=997.65, stdev=5417.77 00:14:13.293 clat percentiles (usec): 
00:14:13.293 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:14:13.293 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 247], 00:14:13.293 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 363], 00:14:13.293 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:14:13.293 | 99.99th=[42206] 00:14:13.293 bw ( KiB/s): min= 136, max=14576, per=16.06%, avg=3438.67, stdev=5637.52, samples=6 00:14:13.293 iops : min= 34, max= 3644, avg=859.67, stdev=1409.38, samples=6 00:14:13.293 lat (usec) : 250=64.89%, 500=32.72%, 750=0.38%, 1000=0.06% 00:14:13.293 lat (msec) : 2=0.06%, 10=0.03%, 50=1.83% 00:14:13.293 cpu : usr=0.53%, sys=1.34%, ctx=3395, majf=0, minf=1 00:14:13.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.293 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.293 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.293 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=911077: Fri Jul 12 11:51:02 2024 00:14:13.293 read: IOPS=2375, BW=9500KiB/s (9728kB/s)(34.1MiB/3680msec) 00:14:13.293 slat (usec): min=4, max=15053, avg=16.02, stdev=314.91 00:14:13.293 clat (usec): min=175, max=53537, avg=400.36, stdev=2539.73 00:14:13.293 lat (usec): min=181, max=53544, avg=416.38, stdev=2559.43 00:14:13.293 clat percentiles (usec): 00:14:13.293 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:14:13.293 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 235], 00:14:13.293 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 351], 00:14:13.293 | 99.00th=[ 510], 99.50th=[ 906], 99.90th=[42206], 99.95th=[42206], 00:14:13.293 | 99.99th=[53740] 00:14:13.293 bw ( KiB/s): min= 104, max=17840, per=44.20%, avg=9459.43, stdev=6999.32, samples=7 00:14:13.293 iops : min= 26, max= 4460, avg=2364.86, stdev=1749.83, samples=7 00:14:13.293 lat (usec) : 250=70.48%, 500=28.44%, 750=0.53%, 1000=0.06% 00:14:13.293 lat (msec) : 2=0.09%, 4=0.01%, 50=0.37%, 100=0.01% 00:14:13.293 cpu : usr=1.22%, sys=3.04%, ctx=8749, majf=0, minf=1 00:14:13.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 issued rwts: total=8741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.294 job2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=911111: Fri Jul 12 11:51:02 2024 00:14:13.294 read: IOPS=671, BW=2685KiB/s (2749kB/s)(8492KiB/3163msec) 00:14:13.294 slat (usec): min=4, max=6822, avg=15.70, stdev=148.00 00:14:13.294 clat (usec): min=186, max=42262, avg=1470.96, stdev=6805.42 00:14:13.294 lat (usec): min=191, max=42270, avg=1483.45, stdev=6806.19 00:14:13.294 clat percentiles (usec): 00:14:13.294 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 225], 00:14:13.294 | 30.00th=[ 239], 40.00th=[ 262], 50.00th=[ 302], 60.00th=[ 330], 00:14:13.294 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 445], 95.00th=[ 515], 00:14:13.294 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:13.294 | 99.99th=[42206] 00:14:13.294 bw ( KiB/s): 
min= 96, max=10264, per=13.20%, avg=2825.33, stdev=4372.10, samples=6 00:14:13.294 iops : min= 24, max= 2566, avg=706.33, stdev=1093.03, samples=6 00:14:13.294 lat (usec) : 250=35.45%, 500=58.80%, 750=2.68%, 1000=0.14% 00:14:13.294 lat (msec) : 10=0.05%, 50=2.82% 00:14:13.294 cpu : usr=0.41%, sys=1.08%, ctx=2124, majf=0, minf=1 00:14:13.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.294 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=911123: Fri Jul 12 11:51:02 2024 00:14:13.294 read: IOPS=1881, BW=7526KiB/s (7707kB/s)(21.2MiB/2889msec) 00:14:13.294 slat (nsec): min=4591, max=47423, avg=9771.37, stdev=5078.48 00:14:13.294 clat (usec): min=192, max=41188, avg=515.28, stdev=3303.44 00:14:13.294 lat (usec): min=204, max=41199, avg=525.06, stdev=3303.80 00:14:13.294 clat percentiles (usec): 00:14:13.294 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:14:13.294 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:14:13.294 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 281], 00:14:13.294 | 99.00th=[ 457], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:13.294 | 99.99th=[41157] 00:14:13.294 bw ( KiB/s): min= 96, max=15304, per=30.60%, avg=6548.80, stdev=7680.59, samples=5 00:14:13.294 iops : min= 24, max= 3826, avg=1637.20, stdev=1920.15, samples=5 00:14:13.294 lat (usec) : 250=66.93%, 500=32.30%, 750=0.06% 00:14:13.294 lat (msec) : 2=0.04%, 50=0.66% 00:14:13.294 cpu : usr=1.11%, sys=2.11%, ctx=5438, majf=0, minf=1 00:14:13.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.294 issued rwts: total=5437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.294 00:14:13.294 Run status group 0 (all jobs): 00:14:13.294 READ: bw=20.9MiB/s (21.9MB/s), 2685KiB/s-9500KiB/s (2749kB/s-9728kB/s), io=76.9MiB (80.7MB), run=2889-3680msec 00:14:13.294 00:14:13.294 Disk stats (read/write): 00:14:13.294 nvme0n1: ios=3422/0, merge=0/0, ticks=3232/0, in_queue=3232, util=95.45% 00:14:13.294 nvme0n2: ios=8478/0, merge=0/0, ticks=3376/0, in_queue=3376, util=94.61% 00:14:13.294 nvme0n3: ios=2152/0, merge=0/0, ticks=3303/0, in_queue=3303, util=99.53% 00:14:13.294 nvme0n4: ios=5347/0, merge=0/0, ticks=2725/0, in_queue=2725, util=96.71% 00:14:13.551 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.551 11:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:13.809 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.809 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:14.067 11:51:03 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.067 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:14.324 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.324 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:14.583 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:14.583 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 910946 00:14:14.583 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:14.583 11:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:14.842 nvmf hotplug test: fio failed as expected 00:14:14.842 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.100 rmmod nvme_tcp 00:14:15.100 rmmod nvme_fabrics 00:14:15.100 rmmod nvme_keyring 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:15.100 11:51:04 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 908909 ']' 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 908909 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 908909 ']' 00:14:15.100 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 908909 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 908909 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 908909' 00:14:15.101 killing process with pid 908909 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 908909 00:14:15.101 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 908909 00:14:15.360 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.360 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.360 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.361 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.361 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.361 11:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.361 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.361 11:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.903 11:51:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.903 00:14:17.903 real 0m23.911s 00:14:17.903 user 1m24.830s 00:14:17.903 sys 0m6.421s 00:14:17.903 11:51:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:17.903 11:51:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.903 ************************************ 00:14:17.903 END TEST nvmf_fio_target 00:14:17.903 ************************************ 00:14:17.903 11:51:06 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.903 11:51:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:17.903 11:51:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:17.903 11:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.903 ************************************ 00:14:17.903 START TEST nvmf_bdevio 00:14:17.903 ************************************ 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.903 * Looking for test storage... 
00:14:17.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.903 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.904 11:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:19.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:19.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:19.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:19.809 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.809 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.810 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.810 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.810 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.810 11:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:14:19.810 00:14:19.810 --- 10.0.0.2 ping statistics --- 00:14:19.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.810 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:14:19.810 00:14:19.810 --- 10.0.0.1 ping statistics --- 00:14:19.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.810 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=914333 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 914333 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 914333 ']' 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:19.810 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.810 [2024-07-12 11:51:09.143042] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:14:19.810 [2024-07-12 11:51:09.143126] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.810 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.810 [2024-07-12 11:51:09.209334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.069 [2024-07-12 11:51:09.320763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.069 [2024-07-12 11:51:09.320805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:20.069 [2024-07-12 11:51:09.320834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.069 [2024-07-12 11:51:09.320846] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.069 [2024-07-12 11:51:09.320856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.069 [2024-07-12 11:51:09.320994] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:14:20.069 [2024-07-12 11:51:09.321048] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:14:20.069 [2024-07-12 11:51:09.321099] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:14:20.069 [2024-07-12 11:51:09.321102] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.069 [2024-07-12 11:51:09.463522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.069 Malloc0 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:20.069 [2024-07-12 11:51:09.516679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:20.069 { 00:14:20.069 "params": { 00:14:20.069 "name": "Nvme$subsystem", 00:14:20.069 "trtype": "$TEST_TRANSPORT", 00:14:20.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.069 "adrfam": "ipv4", 00:14:20.069 "trsvcid": "$NVMF_PORT", 00:14:20.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.069 "hdgst": ${hdgst:-false}, 00:14:20.069 "ddgst": ${ddgst:-false} 00:14:20.069 }, 00:14:20.069 "method": "bdev_nvme_attach_controller" 00:14:20.069 } 00:14:20.069 EOF 00:14:20.069 )") 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:20.069 11:51:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:20.069 "params": { 00:14:20.069 "name": "Nvme1", 00:14:20.069 "trtype": "tcp", 00:14:20.069 "traddr": "10.0.0.2", 00:14:20.069 "adrfam": "ipv4", 00:14:20.069 "trsvcid": "4420", 00:14:20.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.069 "hdgst": false, 00:14:20.069 "ddgst": false 00:14:20.069 }, 00:14:20.069 "method": "bdev_nvme_attach_controller" 00:14:20.069 }' 00:14:20.328 [2024-07-12 11:51:09.561720] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:14:20.328 [2024-07-12 11:51:09.561804] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914429 ] 00:14:20.328 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.328 [2024-07-12 11:51:09.634439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.328 [2024-07-12 11:51:09.784288] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.328 [2024-07-12 11:51:09.784344] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.328 [2024-07-12 11:51:09.784350] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.587 I/O targets: 00:14:20.587 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:20.587 00:14:20.587 00:14:20.587 CUnit - A unit testing framework for C - Version 2.1-3 00:14:20.587 http://cunit.sourceforge.net/ 00:14:20.587 00:14:20.587 00:14:20.587 Suite: bdevio tests on: Nvme1n1 00:14:20.587 Test: blockdev write read block ...passed 00:14:20.846 Test: blockdev write zeroes read block ...passed 00:14:20.846 Test: blockdev write zeroes read no split ...passed 00:14:20.846 Test: blockdev write zeroes read split ...passed 00:14:20.846 Test: blockdev write zeroes read split partial ...passed 00:14:20.846 Test: blockdev reset ...[2024-07-12 11:51:10.174715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:20.846 [2024-07-12 11:51:10.174820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4bd40 (9): Bad file descriptor 00:14:20.846 [2024-07-12 11:51:10.268942] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:20.846 passed 00:14:20.846 Test: blockdev write read 8 blocks ...passed 00:14:20.846 Test: blockdev write read size > 128k ...passed 00:14:20.846 Test: blockdev write read invalid size ...passed 00:14:21.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:21.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:21.106 Test: blockdev write read max offset ...passed 00:14:21.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:21.106 Test: blockdev writev readv 8 blocks ...passed 00:14:21.106 Test: blockdev writev readv 30 x 1block ...passed 00:14:21.106 Test: blockdev writev readv block ...passed 00:14:21.106 Test: blockdev writev readv size > 128k ...passed 00:14:21.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:21.106 Test: blockdev comparev and writev ...[2024-07-12 11:51:10.483085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.483121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.483145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.483162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.483540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.483565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.483587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.483603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.483960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.483984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.484006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.484022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.484380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.484404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.484426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.106 [2024-07-12 11:51:10.484442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:21.106 passed 00:14:21.106 Test: blockdev nvme passthru rw ...passed 00:14:21.106 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:51:10.567127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.106 [2024-07-12 11:51:10.567160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.567300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.106 [2024-07-12 11:51:10.567323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.567461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.106 [2024-07-12 11:51:10.567483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:21.106 [2024-07-12 11:51:10.567614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.106 [2024-07-12 11:51:10.567636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:21.106 passed 00:14:21.106 Test: blockdev nvme admin passthru ...passed 00:14:21.366 Test: blockdev copy ...passed 00:14:21.366 00:14:21.366 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.366 suites 1 1 n/a 0 0 00:14:21.366 tests 23 23 23 0 0 00:14:21.366 asserts 152 152 152 0 n/a 00:14:21.366 00:14:21.366 Elapsed time = 1.216 seconds 00:14:21.624 11:51:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.624 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.625 rmmod nvme_tcp 00:14:21.625 rmmod nvme_fabrics 00:14:21.625 rmmod nvme_keyring 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 914333 ']' 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 914333 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
914333 ']' 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 914333 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 914333 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 914333' 00:14:21.625 killing process with pid 914333 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 914333 00:14:21.625 11:51:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 914333 00:14:21.882 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.883 11:51:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.828 11:51:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.828 00:14:23.828 real 0m6.459s 00:14:23.828 user 0m10.731s 00:14:23.828 sys 0m2.073s 00:14:23.828 11:51:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:23.828 11:51:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.828 ************************************ 00:14:23.828 END TEST nvmf_bdevio 00:14:23.828 ************************************ 00:14:24.085 11:51:13 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:24.085 11:51:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:24.085 11:51:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:24.086 11:51:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.086 ************************************ 00:14:24.086 START TEST nvmf_auth_target 00:14:24.086 ************************************ 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:24.086 * Looking for test storage... 
00:14:24.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.086 11:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.989 11:51:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:25.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:25.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:14:25.989 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.989 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:25.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.990 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:26.248 00:14:26.248 --- 10.0.0.2 ping statistics --- 00:14:26.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.248 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:14:26.248 00:14:26.248 --- 10.0.0.1 ping statistics --- 00:14:26.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.248 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=916500 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 916500 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 916500 ']' 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
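nvmftestinit above finds the two E810 ports (ice driver), keeps cvl_0_1 in the root namespace as the initiator interface, and moves cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target interface before checking connectivity in both directions. A condensed sketch of the same setup, assuming the ports already carry the cvl_0_* names used on the CI hosts:

    # Target port in its own namespace, initiator port in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

The nvmf target is then started inside that namespace with authentication debug logging enabled (ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is the nvmfpid=916500 process the script waits for.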
00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:26.248 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.506 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:26.506 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:26.506 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.506 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:26.506 11:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=916635 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b6727865a2a625d76d7ea4c6ee79e9830519223602ac2bc7 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Th9 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b6727865a2a625d76d7ea4c6ee79e9830519223602ac2bc7 0 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b6727865a2a625d76d7ea4c6ee79e9830519223602ac2bc7 0 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b6727865a2a625d76d7ea4c6ee79e9830519223602ac2bc7 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Th9 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Th9 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Th9 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:26.507 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:26.766 11:51:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e42859d78446c1cc15ad89cf155617c40a5d001133de02ebbd3807a44488dde 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6MK 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e42859d78446c1cc15ad89cf155617c40a5d001133de02ebbd3807a44488dde 3 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e42859d78446c1cc15ad89cf155617c40a5d001133de02ebbd3807a44488dde 3 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e42859d78446c1cc15ad89cf155617c40a5d001133de02ebbd3807a44488dde 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6MK 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6MK 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.6MK 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0b96192c8cf84240ffba3d0d0fc816e 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.f6n 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0b96192c8cf84240ffba3d0d0fc816e 1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0b96192c8cf84240ffba3d0d0fc816e 1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e0b96192c8cf84240ffba3d0d0fc816e 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.f6n 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.f6n 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.f6n 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=928083996341d96a496c64bac983a3654636012017693ac1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Klr 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 928083996341d96a496c64bac983a3654636012017693ac1 2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 928083996341d96a496c64bac983a3654636012017693ac1 2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=928083996341d96a496c64bac983a3654636012017693ac1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Klr 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Klr 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Klr 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5602b8044852a1b4ab34334f71b7011d8b69749624c2021a 00:14:26.766 
11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wfc 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5602b8044852a1b4ab34334f71b7011d8b69749624c2021a 2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5602b8044852a1b4ab34334f71b7011d8b69749624c2021a 2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5602b8044852a1b4ab34334f71b7011d8b69749624c2021a 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wfc 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wfc 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.wfc 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=683108fcc79261b11d7abc0f115bad3f 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.662 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 683108fcc79261b11d7abc0f115bad3f 1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 683108fcc79261b11d7abc0f115bad3f 1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=683108fcc79261b11d7abc0f115bad3f 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.662 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.662 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.662 00:14:26.766 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=339e7928f2311a14ae09a6430403ef75734c77e255f09eed57ba054e39f3d1df 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wj3 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 339e7928f2311a14ae09a6430403ef75734c77e255f09eed57ba054e39f3d1df 3 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 339e7928f2311a14ae09a6430403ef75734c77e255f09eed57ba054e39f3d1df 3 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=339e7928f2311a14ae09a6430403ef75734c77e255f09eed57ba054e39f3d1df 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:26.767 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wj3 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wj3 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.wj3 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 916500 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 916500 ']' 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
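Each gen_dhchap_key call above pulls random bytes with xxd and wraps them into the DHHC-1:<hash>:<base64>: secret format that later appears on the nvme connect command lines; the two-digit field mirrors the digests map used in the script (null=0, sha256=1, sha384=2, sha512=3). A rough sketch of that formatting step, assuming the same convention as nvme-cli's gen-dhchap-key of appending the CRC-32 of the key text (little-endian) before base64 encoding; the file name below is only an example, not one of the paths from this run:

    # 48 hex characters of key material, as gen_dhchap_key null 48 does above
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    # Wrap it as an NVMe DH-HMAC-CHAP secret; "00" marks a secret tied to the null digest
    secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:00:%s:" % base64.b64encode(k+crc).decode())' "$key")
    echo "$secret" > /tmp/spdk.key-null.example
    chmod 0600 /tmp/spdk.key-null.example

The resulting key files are registered on both RPC servers with keyring_file_add_key, as the following lines show, so the target (/var/tmp/spdk.sock) and the host-side bdev_nvme (/var/tmp/host.sock) can refer to them by name (key0/ckey0, key1/ckey1, and so on).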
00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:27.026 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 916635 /var/tmp/host.sock 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 916635 ']' 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:27.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.285 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Th9 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Th9 00:14:27.543 11:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Th9 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.6MK ]] 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6MK 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6MK 00:14:27.800 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6MK 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.f6n 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.056 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.f6n 00:14:28.057 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.f6n 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Klr ]] 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Klr 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Klr 00:14:28.313 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Klr 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wfc 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wfc 00:14:28.570 11:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wfc 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.662 ]] 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.662 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.662 00:14:28.829 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.662 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wj3 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wj3 00:14:29.085 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wj3 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:29.344 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.602 11:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.860 00:14:29.860 11:51:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.860 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.860 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.118 { 00:14:30.118 "cntlid": 1, 00:14:30.118 "qid": 0, 00:14:30.118 "state": "enabled", 00:14:30.118 "listen_address": { 00:14:30.118 "trtype": "TCP", 00:14:30.118 "adrfam": "IPv4", 00:14:30.118 "traddr": "10.0.0.2", 00:14:30.118 "trsvcid": "4420" 00:14:30.118 }, 00:14:30.118 "peer_address": { 00:14:30.118 "trtype": "TCP", 00:14:30.118 "adrfam": "IPv4", 00:14:30.118 "traddr": "10.0.0.1", 00:14:30.118 "trsvcid": "60444" 00:14:30.118 }, 00:14:30.118 "auth": { 00:14:30.118 "state": "completed", 00:14:30.118 "digest": "sha256", 00:14:30.118 "dhgroup": "null" 00:14:30.118 } 00:14:30.118 } 00:14:30.118 ]' 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.118 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.377 11:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:31.314 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.572 11:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.572 11:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.572 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.572 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.140 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.140 11:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.397 11:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.397 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.397 { 00:14:32.397 "cntlid": 3, 00:14:32.397 "qid": 0, 00:14:32.397 "state": "enabled", 00:14:32.397 "listen_address": { 00:14:32.397 
"trtype": "TCP", 00:14:32.397 "adrfam": "IPv4", 00:14:32.397 "traddr": "10.0.0.2", 00:14:32.397 "trsvcid": "4420" 00:14:32.397 }, 00:14:32.397 "peer_address": { 00:14:32.397 "trtype": "TCP", 00:14:32.397 "adrfam": "IPv4", 00:14:32.397 "traddr": "10.0.0.1", 00:14:32.397 "trsvcid": "60468" 00:14:32.398 }, 00:14:32.398 "auth": { 00:14:32.398 "state": "completed", 00:14:32.398 "digest": "sha256", 00:14:32.398 "dhgroup": "null" 00:14:32.398 } 00:14:32.398 } 00:14:32.398 ]' 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.398 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.655 11:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.592 11:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.851 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.110 00:14:34.110 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.110 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.110 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.368 { 00:14:34.368 "cntlid": 5, 00:14:34.368 "qid": 0, 00:14:34.368 "state": "enabled", 00:14:34.368 "listen_address": { 00:14:34.368 "trtype": "TCP", 00:14:34.368 "adrfam": "IPv4", 00:14:34.368 "traddr": "10.0.0.2", 00:14:34.368 "trsvcid": "4420" 00:14:34.368 }, 00:14:34.368 "peer_address": { 00:14:34.368 "trtype": "TCP", 00:14:34.368 "adrfam": "IPv4", 00:14:34.368 "traddr": "10.0.0.1", 00:14:34.368 "trsvcid": "60500" 00:14:34.368 }, 00:14:34.368 "auth": { 00:14:34.368 "state": "completed", 00:14:34.368 "digest": "sha256", 00:14:34.368 "dhgroup": "null" 00:14:34.368 } 00:14:34.368 } 00:14:34.368 ]' 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.368 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.626 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:34.626 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.626 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.626 11:51:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.626 11:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.885 11:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.820 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.078 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.335 00:14:36.335 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.335 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.335 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.594 { 00:14:36.594 "cntlid": 7, 00:14:36.594 "qid": 0, 00:14:36.594 "state": "enabled", 00:14:36.594 "listen_address": { 00:14:36.594 "trtype": "TCP", 00:14:36.594 "adrfam": "IPv4", 00:14:36.594 "traddr": "10.0.0.2", 00:14:36.594 "trsvcid": "4420" 00:14:36.594 }, 00:14:36.594 "peer_address": { 00:14:36.594 "trtype": "TCP", 00:14:36.594 "adrfam": "IPv4", 00:14:36.594 "traddr": "10.0.0.1", 00:14:36.594 "trsvcid": "60538" 00:14:36.594 }, 00:14:36.594 "auth": { 00:14:36.594 "state": "completed", 00:14:36.594 "digest": "sha256", 00:14:36.594 "dhgroup": "null" 00:14:36.594 } 00:14:36.594 } 00:14:36.594 ]' 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.594 11:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.594 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.594 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.594 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.594 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.594 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.853 11:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.791 
11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.791 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.049 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.615 00:14:38.615 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.615 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.615 11:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.615 11:51:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.615 { 00:14:38.615 "cntlid": 9, 00:14:38.615 "qid": 0, 00:14:38.615 "state": "enabled", 00:14:38.615 "listen_address": { 00:14:38.615 "trtype": "TCP", 00:14:38.615 "adrfam": "IPv4", 00:14:38.615 "traddr": "10.0.0.2", 00:14:38.615 "trsvcid": "4420" 00:14:38.615 }, 00:14:38.615 "peer_address": { 00:14:38.615 "trtype": "TCP", 00:14:38.615 "adrfam": "IPv4", 00:14:38.615 "traddr": "10.0.0.1", 00:14:38.615 "trsvcid": "55048" 00:14:38.615 }, 00:14:38.615 "auth": { 00:14:38.615 "state": "completed", 00:14:38.615 "digest": "sha256", 00:14:38.615 "dhgroup": "ffdhe2048" 00:14:38.615 } 00:14:38.615 } 00:14:38.615 ]' 00:14:38.615 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.874 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.158 11:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.106 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.364 11:51:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.364 11:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.623 00:14:40.623 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.623 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.623 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.881 { 00:14:40.881 "cntlid": 11, 00:14:40.881 "qid": 0, 00:14:40.881 "state": "enabled", 00:14:40.881 "listen_address": { 00:14:40.881 "trtype": "TCP", 00:14:40.881 "adrfam": "IPv4", 00:14:40.881 "traddr": "10.0.0.2", 00:14:40.881 "trsvcid": "4420" 00:14:40.881 }, 00:14:40.881 "peer_address": { 00:14:40.881 "trtype": "TCP", 00:14:40.881 "adrfam": "IPv4", 00:14:40.881 "traddr": "10.0.0.1", 00:14:40.881 "trsvcid": "55064" 00:14:40.881 }, 00:14:40.881 "auth": { 00:14:40.881 "state": "completed", 00:14:40.881 "digest": "sha256", 00:14:40.881 "dhgroup": "ffdhe2048" 00:14:40.881 } 00:14:40.881 } 00:14:40.881 ]' 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.881 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.139 11:51:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.139 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.139 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.139 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.139 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.397 11:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.334 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.591 11:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.848 00:14:42.848 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.848 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.848 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.104 { 00:14:43.104 "cntlid": 13, 00:14:43.104 "qid": 0, 00:14:43.104 "state": "enabled", 00:14:43.104 "listen_address": { 00:14:43.104 "trtype": "TCP", 00:14:43.104 "adrfam": "IPv4", 00:14:43.104 "traddr": "10.0.0.2", 00:14:43.104 "trsvcid": "4420" 00:14:43.104 }, 00:14:43.104 "peer_address": { 00:14:43.104 "trtype": "TCP", 00:14:43.104 "adrfam": "IPv4", 00:14:43.104 "traddr": "10.0.0.1", 00:14:43.104 "trsvcid": "55102" 00:14:43.104 }, 00:14:43.104 "auth": { 00:14:43.104 "state": "completed", 00:14:43.104 "digest": "sha256", 00:14:43.104 "dhgroup": "ffdhe2048" 00:14:43.104 } 00:14:43.104 } 00:14:43.104 ]' 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.104 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.362 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.362 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.362 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.620 11:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:14:44.554 11:51:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.554 11:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.812 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.070 00:14:45.070 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.070 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.070 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
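
The trace above repeats one cycle per key for each digest/DH-group combination: the host-side initiator is restricted to a single --dhchap-digests/--dhchap-dhgroups pair, the host NQN is registered on the target subsystem with a DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is being exercised), a controller is attached through the host RPC socket, and the negotiated parameters are read back with nvmf_subsystem_get_qpairs. Below is a minimal standalone sketch of that cycle, not the test script itself: it assumes the target is already listening on 10.0.0.2:4420, that key3/ckey3 were registered earlier in the run (not shown in this excerpt), and that rpc_cmd is the test's wrapper around the target's RPC socket; the RPC names and flags are the ones visible in the trace.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host-side initiator (hostrpc in the trace): restrict DH-HMAC-CHAP
  # negotiation to one digest/dhgroup combination for this pass.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host NQN on the subsystem with key "key3";
  # cycles that test bidirectional auth also pass --dhchap-ctrlr-key ckeyN.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key3

  # Host side: attach a controller, authenticating with the same key.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

  # Confirm the controller came up, then check what the qpair negotiated
  # (the trace expects digest=sha256, dhgroup=ffdhe2048, state=completed).
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'
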
00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.328 { 00:14:45.328 "cntlid": 15, 00:14:45.328 "qid": 0, 00:14:45.328 "state": "enabled", 00:14:45.328 "listen_address": { 00:14:45.328 "trtype": "TCP", 00:14:45.328 "adrfam": "IPv4", 00:14:45.328 "traddr": "10.0.0.2", 00:14:45.328 "trsvcid": "4420" 00:14:45.328 }, 00:14:45.328 "peer_address": { 00:14:45.328 "trtype": "TCP", 00:14:45.328 "adrfam": "IPv4", 00:14:45.328 "traddr": "10.0.0.1", 00:14:45.328 "trsvcid": "55126" 00:14:45.328 }, 00:14:45.328 "auth": { 00:14:45.328 "state": "completed", 00:14:45.328 "digest": "sha256", 00:14:45.328 "dhgroup": "ffdhe2048" 00:14:45.328 } 00:14:45.328 } 00:14:45.328 ]' 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.328 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.586 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.586 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.586 11:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.845 11:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.783 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.041 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.299 00:14:47.299 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.299 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.299 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.556 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.556 { 00:14:47.556 "cntlid": 17, 00:14:47.556 "qid": 0, 00:14:47.556 "state": "enabled", 00:14:47.556 "listen_address": { 00:14:47.556 "trtype": "TCP", 00:14:47.556 "adrfam": "IPv4", 00:14:47.556 "traddr": "10.0.0.2", 00:14:47.556 "trsvcid": "4420" 00:14:47.556 }, 00:14:47.556 "peer_address": { 00:14:47.556 "trtype": "TCP", 00:14:47.556 "adrfam": "IPv4", 00:14:47.556 "traddr": "10.0.0.1", 00:14:47.556 "trsvcid": "55154" 00:14:47.556 }, 00:14:47.556 "auth": { 00:14:47.556 "state": "completed", 00:14:47.556 "digest": "sha256", 00:14:47.556 "dhgroup": "ffdhe3072" 00:14:47.556 } 00:14:47.557 } 00:14:47.557 ]' 00:14:47.557 11:51:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.557 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.557 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.557 11:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:47.557 11:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.557 11:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.557 11:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.557 11:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.815 11:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:48.750 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.009 
11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.009 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.578 00:14:49.578 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.578 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.578 11:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.836 { 00:14:49.836 "cntlid": 19, 00:14:49.836 "qid": 0, 00:14:49.836 "state": "enabled", 00:14:49.836 "listen_address": { 00:14:49.836 "trtype": "TCP", 00:14:49.836 "adrfam": "IPv4", 00:14:49.836 "traddr": "10.0.0.2", 00:14:49.836 "trsvcid": "4420" 00:14:49.836 }, 00:14:49.836 "peer_address": { 00:14:49.836 "trtype": "TCP", 00:14:49.836 "adrfam": "IPv4", 00:14:49.836 "traddr": "10.0.0.1", 00:14:49.836 "trsvcid": "49600" 00:14:49.836 }, 00:14:49.836 "auth": { 00:14:49.836 "state": "completed", 00:14:49.836 "digest": "sha256", 00:14:49.836 "dhgroup": "ffdhe3072" 00:14:49.836 } 00:14:49.836 } 00:14:49.836 ]' 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.836 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.094 11:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.030 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.288 11:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.853 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.853 { 00:14:51.853 "cntlid": 21, 00:14:51.853 "qid": 0, 00:14:51.853 "state": "enabled", 00:14:51.853 "listen_address": { 00:14:51.853 "trtype": "TCP", 00:14:51.853 "adrfam": "IPv4", 00:14:51.853 "traddr": "10.0.0.2", 00:14:51.853 "trsvcid": "4420" 00:14:51.853 }, 00:14:51.853 "peer_address": { 00:14:51.853 "trtype": "TCP", 00:14:51.853 "adrfam": "IPv4", 00:14:51.853 "traddr": "10.0.0.1", 00:14:51.853 "trsvcid": "49632" 00:14:51.853 }, 00:14:51.853 "auth": { 00:14:51.853 "state": "completed", 00:14:51.853 "digest": "sha256", 00:14:51.853 "dhgroup": "ffdhe3072" 00:14:51.853 } 00:14:51.853 } 00:14:51.853 ]' 00:14:51.853 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.110 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.368 11:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.303 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.561 11:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.818 00:14:53.818 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.818 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.818 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.076 { 00:14:54.076 "cntlid": 23, 00:14:54.076 "qid": 0, 00:14:54.076 "state": "enabled", 00:14:54.076 "listen_address": { 00:14:54.076 "trtype": "TCP", 00:14:54.076 "adrfam": "IPv4", 00:14:54.076 "traddr": "10.0.0.2", 00:14:54.076 "trsvcid": "4420" 00:14:54.076 }, 00:14:54.076 "peer_address": { 00:14:54.076 "trtype": "TCP", 00:14:54.076 
"adrfam": "IPv4", 00:14:54.076 "traddr": "10.0.0.1", 00:14:54.076 "trsvcid": "49658" 00:14:54.076 }, 00:14:54.076 "auth": { 00:14:54.076 "state": "completed", 00:14:54.076 "digest": "sha256", 00:14:54.076 "dhgroup": "ffdhe3072" 00:14:54.076 } 00:14:54.076 } 00:14:54.076 ]' 00:14:54.076 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.334 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.591 11:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:55.564 11:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.821 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.080 00:14:56.340 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.340 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.340 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.599 { 00:14:56.599 "cntlid": 25, 00:14:56.599 "qid": 0, 00:14:56.599 "state": "enabled", 00:14:56.599 "listen_address": { 00:14:56.599 "trtype": "TCP", 00:14:56.599 "adrfam": "IPv4", 00:14:56.599 "traddr": "10.0.0.2", 00:14:56.599 "trsvcid": "4420" 00:14:56.599 }, 00:14:56.599 "peer_address": { 00:14:56.599 "trtype": "TCP", 00:14:56.599 "adrfam": "IPv4", 00:14:56.599 "traddr": "10.0.0.1", 00:14:56.599 "trsvcid": "49676" 00:14:56.599 }, 00:14:56.599 "auth": { 00:14:56.599 "state": "completed", 00:14:56.599 "digest": "sha256", 00:14:56.599 "dhgroup": "ffdhe4096" 00:14:56.599 } 00:14:56.599 } 00:14:56.599 ]' 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.599 11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.599 
11:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.857 11:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.793 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.051 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.620 00:14:58.620 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.620 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.620 11:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.620 { 00:14:58.620 "cntlid": 27, 00:14:58.620 "qid": 0, 00:14:58.620 "state": "enabled", 00:14:58.620 "listen_address": { 00:14:58.620 "trtype": "TCP", 00:14:58.620 "adrfam": "IPv4", 00:14:58.620 "traddr": "10.0.0.2", 00:14:58.620 "trsvcid": "4420" 00:14:58.620 }, 00:14:58.620 "peer_address": { 00:14:58.620 "trtype": "TCP", 00:14:58.620 "adrfam": "IPv4", 00:14:58.620 "traddr": "10.0.0.1", 00:14:58.620 "trsvcid": "47972" 00:14:58.620 }, 00:14:58.620 "auth": { 00:14:58.620 "state": "completed", 00:14:58.620 "digest": "sha256", 00:14:58.620 "dhgroup": "ffdhe4096" 00:14:58.620 } 00:14:58.620 } 00:14:58.620 ]' 00:14:58.620 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.878 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.136 11:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
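Each pass above repeats the same connect_authenticate sequence for one digest/dhgroup/key combination. A condensed sketch of that sequence, reconstructed only from the RPC invocations visible in this log (the relative scripts/rpc.py path, the $hostnqn shorthand for the uuid host NQN, and the loop variable are illustrative; socket path, addresses, NQNs and flags are the ones printed above):

# one iteration of the loop this log is exercising (sketch, not the script itself)
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
for keyid in 0 1 2 3; do
    # host side: restrict negotiation to the digest and DH group under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # target side: allow this host with the key pair under test
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # host side: attach a controller, authenticating with the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # verify the negotiated auth parameters on the resulting qpair, then tear down
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
done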
00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.073 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.331 11:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.897 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.897 
11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.897 { 00:15:00.897 "cntlid": 29, 00:15:00.897 "qid": 0, 00:15:00.897 "state": "enabled", 00:15:00.897 "listen_address": { 00:15:00.897 "trtype": "TCP", 00:15:00.897 "adrfam": "IPv4", 00:15:00.897 "traddr": "10.0.0.2", 00:15:00.897 "trsvcid": "4420" 00:15:00.897 }, 00:15:00.897 "peer_address": { 00:15:00.897 "trtype": "TCP", 00:15:00.897 "adrfam": "IPv4", 00:15:00.897 "traddr": "10.0.0.1", 00:15:00.897 "trsvcid": "48008" 00:15:00.897 }, 00:15:00.897 "auth": { 00:15:00.897 "state": "completed", 00:15:00.897 "digest": "sha256", 00:15:00.897 "dhgroup": "ffdhe4096" 00:15:00.897 } 00:15:00.897 } 00:15:00.897 ]' 00:15:00.897 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.156 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.414 11:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.350 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.608 11:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.866 00:15:02.866 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.866 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.866 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.124 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.124 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.124 11:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.124 11:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.124 11:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.125 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.125 { 00:15:03.125 "cntlid": 31, 00:15:03.125 "qid": 0, 00:15:03.125 "state": "enabled", 00:15:03.125 "listen_address": { 00:15:03.125 "trtype": "TCP", 00:15:03.125 "adrfam": "IPv4", 00:15:03.125 "traddr": "10.0.0.2", 00:15:03.125 "trsvcid": "4420" 00:15:03.125 }, 00:15:03.125 "peer_address": { 00:15:03.125 "trtype": "TCP", 00:15:03.125 "adrfam": "IPv4", 00:15:03.125 "traddr": "10.0.0.1", 00:15:03.125 "trsvcid": "48044" 00:15:03.125 }, 00:15:03.125 "auth": { 00:15:03.125 "state": "completed", 00:15:03.125 "digest": "sha256", 00:15:03.125 "dhgroup": "ffdhe4096" 00:15:03.125 } 00:15:03.125 } 00:15:03.125 ]' 00:15:03.125 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.383 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.642 11:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:04.574 11:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:15:04.832 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.398 00:15:05.398 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.398 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.398 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.656 { 00:15:05.656 "cntlid": 33, 00:15:05.656 "qid": 0, 00:15:05.656 "state": "enabled", 00:15:05.656 "listen_address": { 00:15:05.656 "trtype": "TCP", 00:15:05.656 "adrfam": "IPv4", 00:15:05.656 "traddr": "10.0.0.2", 00:15:05.656 "trsvcid": "4420" 00:15:05.656 }, 00:15:05.656 "peer_address": { 00:15:05.656 "trtype": "TCP", 00:15:05.656 "adrfam": "IPv4", 00:15:05.656 "traddr": "10.0.0.1", 00:15:05.656 "trsvcid": "48070" 00:15:05.656 }, 00:15:05.656 "auth": { 00:15:05.656 "state": "completed", 00:15:05.656 "digest": "sha256", 00:15:05.656 "dhgroup": "ffdhe6144" 00:15:05.656 } 00:15:05.656 } 00:15:05.656 ]' 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.656 11:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.656 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.913 11:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:06.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.873 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.129 11:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.690 00:15:07.690 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.690 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.690 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
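The get_qpairs call above is then checked field by field with jq. A minimal sketch of that verification step as it appears throughout this log (the $qpairs variable stands for the JSON array printed below it; the expected values are the ones configured for this iteration):

# $qpairs holds the output of: rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
digest=$(jq -r '.[0].auth.digest'   <<< "$qpairs")   # expected: sha256
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")   # expected: ffdhe6144 in this pass
state=$(jq -r '.[0].auth.state'     <<< "$qpairs")   # expected: completed
[[ $digest == sha256 && $dhgroup == ffdhe6144 && $state == completed ]]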
00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.972 { 00:15:07.972 "cntlid": 35, 00:15:07.972 "qid": 0, 00:15:07.972 "state": "enabled", 00:15:07.972 "listen_address": { 00:15:07.972 "trtype": "TCP", 00:15:07.972 "adrfam": "IPv4", 00:15:07.972 "traddr": "10.0.0.2", 00:15:07.972 "trsvcid": "4420" 00:15:07.972 }, 00:15:07.972 "peer_address": { 00:15:07.972 "trtype": "TCP", 00:15:07.972 "adrfam": "IPv4", 00:15:07.972 "traddr": "10.0.0.1", 00:15:07.972 "trsvcid": "48108" 00:15:07.972 }, 00:15:07.972 "auth": { 00:15:07.972 "state": "completed", 00:15:07.972 "digest": "sha256", 00:15:07.972 "dhgroup": "ffdhe6144" 00:15:07.972 } 00:15:07.972 } 00:15:07.972 ]' 00:15:07.972 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.228 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.485 11:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.417 11:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
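Besides the in-process initiator, each pass also connects the kernel host with nvme-cli, handing the DH-HMAC-CHAP secrets over on the command line as in the nvme connect invocations above. A hedged sketch of that step (address, subsystem NQN, host NQN and host ID are the ones used throughout this log; $host_key and $ctrl_key stand for the DHHC-1:xx:... secrets printed in it, and --dhchap-ctrl-secret is dropped on the passes that configure only --dhchap-key key3):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0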
00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.675 11:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.932 11:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.932 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.932 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.496 00:15:10.496 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.496 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.496 11:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.753 { 00:15:10.753 "cntlid": 37, 00:15:10.753 "qid": 0, 00:15:10.753 "state": "enabled", 00:15:10.753 "listen_address": { 00:15:10.753 "trtype": "TCP", 00:15:10.753 "adrfam": "IPv4", 00:15:10.753 "traddr": "10.0.0.2", 00:15:10.753 "trsvcid": "4420" 00:15:10.753 }, 00:15:10.753 "peer_address": { 00:15:10.753 "trtype": "TCP", 00:15:10.753 "adrfam": "IPv4", 00:15:10.753 "traddr": "10.0.0.1", 00:15:10.753 "trsvcid": "45544" 00:15:10.753 }, 00:15:10.753 "auth": { 00:15:10.753 "state": "completed", 00:15:10.753 "digest": "sha256", 00:15:10.753 "dhgroup": "ffdhe6144" 00:15:10.753 } 00:15:10.753 } 00:15:10.753 ]' 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.753 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.754 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.011 11:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.975 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.232 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:12.232 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.232 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.233 11:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.799 00:15:12.799 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.799 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.799 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.065 { 00:15:13.065 "cntlid": 39, 00:15:13.065 "qid": 0, 00:15:13.065 "state": "enabled", 00:15:13.065 "listen_address": { 00:15:13.065 "trtype": "TCP", 00:15:13.065 "adrfam": "IPv4", 00:15:13.065 "traddr": "10.0.0.2", 00:15:13.065 "trsvcid": "4420" 00:15:13.065 }, 00:15:13.065 "peer_address": { 00:15:13.065 "trtype": "TCP", 00:15:13.065 "adrfam": "IPv4", 00:15:13.065 "traddr": "10.0.0.1", 00:15:13.065 "trsvcid": "45564" 00:15:13.065 }, 00:15:13.065 "auth": { 00:15:13.065 "state": "completed", 00:15:13.065 "digest": "sha256", 00:15:13.065 "dhgroup": "ffdhe6144" 00:15:13.065 } 00:15:13.065 } 00:15:13.065 ]' 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.065 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.325 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.325 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.325 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.325 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.325 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.584 11:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:14.521 11:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.780 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.718 00:15:15.718 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.718 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.718 11:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.718 { 00:15:15.718 "cntlid": 41, 00:15:15.718 "qid": 0, 00:15:15.718 "state": "enabled", 00:15:15.718 "listen_address": { 00:15:15.718 "trtype": "TCP", 00:15:15.718 "adrfam": "IPv4", 00:15:15.718 "traddr": "10.0.0.2", 00:15:15.718 "trsvcid": "4420" 00:15:15.718 }, 00:15:15.718 "peer_address": { 00:15:15.718 "trtype": "TCP", 00:15:15.718 "adrfam": "IPv4", 00:15:15.718 "traddr": "10.0.0.1", 00:15:15.718 "trsvcid": "45576" 00:15:15.718 }, 00:15:15.718 "auth": { 00:15:15.718 "state": "completed", 00:15:15.718 "digest": "sha256", 00:15:15.718 "dhgroup": "ffdhe8192" 00:15:15.718 } 00:15:15.718 } 00:15:15.718 ]' 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.718 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.976 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:15.976 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.976 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.976 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.976 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.234 11:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:17.170 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.428 11:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.367 00:15:18.367 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.367 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.367 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.625 { 00:15:18.625 "cntlid": 43, 00:15:18.625 "qid": 0, 00:15:18.625 "state": "enabled", 00:15:18.625 "listen_address": { 00:15:18.625 "trtype": "TCP", 00:15:18.625 "adrfam": "IPv4", 00:15:18.625 "traddr": "10.0.0.2", 00:15:18.625 "trsvcid": "4420" 00:15:18.625 }, 00:15:18.625 "peer_address": { 
00:15:18.625 "trtype": "TCP", 00:15:18.625 "adrfam": "IPv4", 00:15:18.625 "traddr": "10.0.0.1", 00:15:18.625 "trsvcid": "45608" 00:15:18.625 }, 00:15:18.625 "auth": { 00:15:18.625 "state": "completed", 00:15:18.625 "digest": "sha256", 00:15:18.625 "dhgroup": "ffdhe8192" 00:15:18.625 } 00:15:18.625 } 00:15:18.625 ]' 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.625 11:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.625 11:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.625 11:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.625 11:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.883 11:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.821 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.078 11:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.013 00:15:21.013 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.013 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.013 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.271 { 00:15:21.271 "cntlid": 45, 00:15:21.271 "qid": 0, 00:15:21.271 "state": "enabled", 00:15:21.271 "listen_address": { 00:15:21.271 "trtype": "TCP", 00:15:21.271 "adrfam": "IPv4", 00:15:21.271 "traddr": "10.0.0.2", 00:15:21.271 "trsvcid": "4420" 00:15:21.271 }, 00:15:21.271 "peer_address": { 00:15:21.271 "trtype": "TCP", 00:15:21.271 "adrfam": "IPv4", 00:15:21.271 "traddr": "10.0.0.1", 00:15:21.271 "trsvcid": "49276" 00:15:21.271 }, 00:15:21.271 "auth": { 00:15:21.271 "state": "completed", 00:15:21.271 "digest": "sha256", 00:15:21.271 "dhgroup": "ffdhe8192" 00:15:21.271 } 00:15:21.271 } 00:15:21.271 ]' 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.271 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.271 11:52:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.529 11:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.462 11:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.721 11:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:23.659 00:15:23.659 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.659 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.659 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.917 { 00:15:23.917 "cntlid": 47, 00:15:23.917 "qid": 0, 00:15:23.917 "state": "enabled", 00:15:23.917 "listen_address": { 00:15:23.917 "trtype": "TCP", 00:15:23.917 "adrfam": "IPv4", 00:15:23.917 "traddr": "10.0.0.2", 00:15:23.917 "trsvcid": "4420" 00:15:23.917 }, 00:15:23.917 "peer_address": { 00:15:23.917 "trtype": "TCP", 00:15:23.917 "adrfam": "IPv4", 00:15:23.917 "traddr": "10.0.0.1", 00:15:23.917 "trsvcid": "49296" 00:15:23.917 }, 00:15:23.917 "auth": { 00:15:23.917 "state": "completed", 00:15:23.917 "digest": "sha256", 00:15:23.917 "dhgroup": "ffdhe8192" 00:15:23.917 } 00:15:23.917 } 00:15:23.917 ]' 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.917 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.175 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.175 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.175 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.175 11:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.553 
11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.553 11:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.811 00:15:25.811 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.811 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.811 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.069 11:52:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.069 { 00:15:26.069 "cntlid": 49, 00:15:26.069 "qid": 0, 00:15:26.069 "state": "enabled", 00:15:26.069 "listen_address": { 00:15:26.069 "trtype": "TCP", 00:15:26.069 "adrfam": "IPv4", 00:15:26.069 "traddr": "10.0.0.2", 00:15:26.069 "trsvcid": "4420" 00:15:26.069 }, 00:15:26.069 "peer_address": { 00:15:26.069 "trtype": "TCP", 00:15:26.069 "adrfam": "IPv4", 00:15:26.069 "traddr": "10.0.0.1", 00:15:26.069 "trsvcid": "49314" 00:15:26.069 }, 00:15:26.069 "auth": { 00:15:26.069 "state": "completed", 00:15:26.069 "digest": "sha384", 00:15:26.069 "dhgroup": "null" 00:15:26.069 } 00:15:26.069 } 00:15:26.069 ]' 00:15:26.069 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.327 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.586 11:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.521 11:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.780 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.781 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.781 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.063 00:15:28.063 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.063 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.063 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.321 { 00:15:28.321 "cntlid": 51, 00:15:28.321 "qid": 0, 00:15:28.321 "state": "enabled", 00:15:28.321 "listen_address": { 00:15:28.321 "trtype": "TCP", 00:15:28.321 "adrfam": "IPv4", 00:15:28.321 "traddr": "10.0.0.2", 00:15:28.321 "trsvcid": "4420" 00:15:28.321 }, 00:15:28.321 "peer_address": { 00:15:28.321 "trtype": "TCP", 00:15:28.321 "adrfam": "IPv4", 00:15:28.321 "traddr": "10.0.0.1", 00:15:28.321 "trsvcid": "49344" 00:15:28.321 }, 00:15:28.321 "auth": { 00:15:28.321 "state": "completed", 00:15:28.321 "digest": "sha384", 00:15:28.321 "dhgroup": "null" 00:15:28.321 } 00:15:28.321 } 00:15:28.321 ]' 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
00:15:28.321 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.580 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.580 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.580 11:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.839 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.776 11:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:29.776 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.345 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.345 { 00:15:30.345 "cntlid": 53, 00:15:30.345 "qid": 0, 00:15:30.345 "state": "enabled", 00:15:30.345 "listen_address": { 00:15:30.345 "trtype": "TCP", 00:15:30.345 "adrfam": "IPv4", 00:15:30.345 "traddr": "10.0.0.2", 00:15:30.345 "trsvcid": "4420" 00:15:30.345 }, 00:15:30.345 "peer_address": { 00:15:30.345 "trtype": "TCP", 00:15:30.345 "adrfam": "IPv4", 00:15:30.345 "traddr": "10.0.0.1", 00:15:30.345 "trsvcid": "39944" 00:15:30.345 }, 00:15:30.345 "auth": { 00:15:30.345 "state": "completed", 00:15:30.345 "digest": "sha384", 00:15:30.345 "dhgroup": "null" 00:15:30.345 } 00:15:30.345 } 00:15:30.345 ]' 00:15:30.345 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.603 11:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.860 11:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.793 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.793 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.051 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.309 00:15:32.309 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.309 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.309 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.567 { 00:15:32.567 "cntlid": 55, 00:15:32.567 "qid": 0, 00:15:32.567 "state": "enabled", 00:15:32.567 "listen_address": { 00:15:32.567 "trtype": "TCP", 00:15:32.567 "adrfam": "IPv4", 00:15:32.567 "traddr": "10.0.0.2", 00:15:32.567 "trsvcid": "4420" 00:15:32.567 }, 00:15:32.567 "peer_address": { 00:15:32.567 "trtype": "TCP", 00:15:32.567 "adrfam": "IPv4", 00:15:32.567 "traddr": "10.0.0.1", 00:15:32.567 "trsvcid": "39966" 00:15:32.567 }, 00:15:32.567 "auth": { 00:15:32.567 "state": "completed", 00:15:32.567 "digest": "sha384", 00:15:32.567 "dhgroup": "null" 00:15:32.567 } 00:15:32.567 } 00:15:32.567 ]' 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.567 11:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.567 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.567 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.567 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.567 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.567 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.824 11:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:33.756 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.756 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.756 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.756 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.013 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.013 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.013 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.013 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.013 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:34.269 
11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.269 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.526 00:15:34.526 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.526 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.526 11:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.784 { 00:15:34.784 "cntlid": 57, 00:15:34.784 "qid": 0, 00:15:34.784 "state": "enabled", 00:15:34.784 "listen_address": { 00:15:34.784 "trtype": "TCP", 00:15:34.784 "adrfam": "IPv4", 00:15:34.784 "traddr": "10.0.0.2", 00:15:34.784 "trsvcid": "4420" 00:15:34.784 }, 00:15:34.784 "peer_address": { 00:15:34.784 "trtype": "TCP", 00:15:34.784 "adrfam": "IPv4", 00:15:34.784 "traddr": "10.0.0.1", 00:15:34.784 "trsvcid": "39996" 00:15:34.784 }, 00:15:34.784 "auth": { 00:15:34.784 "state": "completed", 00:15:34.784 "digest": "sha384", 00:15:34.784 "dhgroup": "ffdhe2048" 00:15:34.784 } 00:15:34.784 } 00:15:34.784 ]' 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.784 11:52:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.784 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.349 11:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.281 11:52:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.281 11:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.846 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.846 { 00:15:36.846 "cntlid": 59, 00:15:36.846 "qid": 0, 00:15:36.846 "state": "enabled", 00:15:36.846 "listen_address": { 00:15:36.846 "trtype": "TCP", 00:15:36.846 "adrfam": "IPv4", 00:15:36.846 "traddr": "10.0.0.2", 00:15:36.846 "trsvcid": "4420" 00:15:36.846 }, 00:15:36.846 "peer_address": { 00:15:36.846 "trtype": "TCP", 00:15:36.846 "adrfam": "IPv4", 00:15:36.846 "traddr": "10.0.0.1", 00:15:36.846 "trsvcid": "40014" 00:15:36.846 }, 00:15:36.846 "auth": { 00:15:36.846 "state": "completed", 00:15:36.846 "digest": "sha384", 00:15:36.846 "dhgroup": "ffdhe2048" 00:15:36.846 } 00:15:36.846 } 00:15:36.846 ]' 00:15:36.846 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.103 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.360 11:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.292 11:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.293 11:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.293 11:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.857 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.114 00:15:39.114 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.114 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.114 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.372 { 00:15:39.372 "cntlid": 61, 00:15:39.372 "qid": 0, 00:15:39.372 "state": "enabled", 00:15:39.372 "listen_address": { 00:15:39.372 "trtype": "TCP", 00:15:39.372 "adrfam": "IPv4", 00:15:39.372 "traddr": "10.0.0.2", 00:15:39.372 "trsvcid": "4420" 00:15:39.372 }, 00:15:39.372 "peer_address": { 00:15:39.372 "trtype": "TCP", 00:15:39.372 "adrfam": "IPv4", 00:15:39.372 "traddr": "10.0.0.1", 00:15:39.372 "trsvcid": "36238" 00:15:39.372 }, 00:15:39.372 "auth": { 00:15:39.372 "state": "completed", 00:15:39.372 "digest": "sha384", 00:15:39.372 "dhgroup": "ffdhe2048" 00:15:39.372 } 00:15:39.372 } 00:15:39.372 ]' 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.372 11:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.631 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:15:40.563 11:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.821 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.822 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.822 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.386 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.386 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.644 { 00:15:41.644 "cntlid": 63, 00:15:41.644 "qid": 0, 00:15:41.644 "state": "enabled", 00:15:41.644 "listen_address": { 00:15:41.644 "trtype": "TCP", 00:15:41.644 "adrfam": "IPv4", 00:15:41.644 "traddr": "10.0.0.2", 00:15:41.644 "trsvcid": "4420" 00:15:41.644 }, 00:15:41.644 "peer_address": { 00:15:41.644 "trtype": "TCP", 00:15:41.644 "adrfam": "IPv4", 00:15:41.644 "traddr": "10.0.0.1", 00:15:41.644 "trsvcid": "36266" 00:15:41.644 }, 00:15:41.644 "auth": { 00:15:41.644 "state": "completed", 00:15:41.644 "digest": 
"sha384", 00:15:41.644 "dhgroup": "ffdhe2048" 00:15:41.644 } 00:15:41.644 } 00:15:41.644 ]' 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.644 11:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.901 11:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:42.832 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.090 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.347 00:15:43.347 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.347 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.347 11:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.605 { 00:15:43.605 "cntlid": 65, 00:15:43.605 "qid": 0, 00:15:43.605 "state": "enabled", 00:15:43.605 "listen_address": { 00:15:43.605 "trtype": "TCP", 00:15:43.605 "adrfam": "IPv4", 00:15:43.605 "traddr": "10.0.0.2", 00:15:43.605 "trsvcid": "4420" 00:15:43.605 }, 00:15:43.605 "peer_address": { 00:15:43.605 "trtype": "TCP", 00:15:43.605 "adrfam": "IPv4", 00:15:43.605 "traddr": "10.0.0.1", 00:15:43.605 "trsvcid": "36284" 00:15:43.605 }, 00:15:43.605 "auth": { 00:15:43.605 "state": "completed", 00:15:43.605 "digest": "sha384", 00:15:43.605 "dhgroup": "ffdhe3072" 00:15:43.605 } 00:15:43.605 } 00:15:43.605 ]' 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.605 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.885 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.885 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.885 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.885 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.885 11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.146 
11:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.077 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.335 11:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.593 00:15:45.593 11:52:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.593 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.593 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.851 { 00:15:45.851 "cntlid": 67, 00:15:45.851 "qid": 0, 00:15:45.851 "state": "enabled", 00:15:45.851 "listen_address": { 00:15:45.851 "trtype": "TCP", 00:15:45.851 "adrfam": "IPv4", 00:15:45.851 "traddr": "10.0.0.2", 00:15:45.851 "trsvcid": "4420" 00:15:45.851 }, 00:15:45.851 "peer_address": { 00:15:45.851 "trtype": "TCP", 00:15:45.851 "adrfam": "IPv4", 00:15:45.851 "traddr": "10.0.0.1", 00:15:45.851 "trsvcid": "36298" 00:15:45.851 }, 00:15:45.851 "auth": { 00:15:45.851 "state": "completed", 00:15:45.851 "digest": "sha384", 00:15:45.851 "dhgroup": "ffdhe3072" 00:15:45.851 } 00:15:45.851 } 00:15:45.851 ]' 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.851 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.109 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.109 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.109 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.109 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.109 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.367 11:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.300 
11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.300 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.558 11:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.815 00:15:47.815 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.815 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.815 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.072 { 00:15:48.072 "cntlid": 69, 00:15:48.072 "qid": 0, 00:15:48.072 "state": "enabled", 00:15:48.072 "listen_address": { 
00:15:48.072 "trtype": "TCP", 00:15:48.072 "adrfam": "IPv4", 00:15:48.072 "traddr": "10.0.0.2", 00:15:48.072 "trsvcid": "4420" 00:15:48.072 }, 00:15:48.072 "peer_address": { 00:15:48.072 "trtype": "TCP", 00:15:48.072 "adrfam": "IPv4", 00:15:48.072 "traddr": "10.0.0.1", 00:15:48.072 "trsvcid": "36332" 00:15:48.072 }, 00:15:48.072 "auth": { 00:15:48.072 "state": "completed", 00:15:48.072 "digest": "sha384", 00:15:48.072 "dhgroup": "ffdhe3072" 00:15:48.072 } 00:15:48.072 } 00:15:48.072 ]' 00:15:48.072 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.328 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.585 11:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.515 11:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.773 
11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.773 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.031 00:15:50.031 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.031 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.031 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.288 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.288 { 00:15:50.288 "cntlid": 71, 00:15:50.288 "qid": 0, 00:15:50.288 "state": "enabled", 00:15:50.288 "listen_address": { 00:15:50.288 "trtype": "TCP", 00:15:50.288 "adrfam": "IPv4", 00:15:50.288 "traddr": "10.0.0.2", 00:15:50.288 "trsvcid": "4420" 00:15:50.288 }, 00:15:50.289 "peer_address": { 00:15:50.289 "trtype": "TCP", 00:15:50.289 "adrfam": "IPv4", 00:15:50.289 "traddr": "10.0.0.1", 00:15:50.289 "trsvcid": "43204" 00:15:50.289 }, 00:15:50.289 "auth": { 00:15:50.289 "state": "completed", 00:15:50.289 "digest": "sha384", 00:15:50.289 "dhgroup": "ffdhe3072" 00:15:50.289 } 00:15:50.289 } 00:15:50.289 ]' 00:15:50.289 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.548 11:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.806 11:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.738 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.739 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.996 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.561 00:15:52.561 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.561 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.561 11:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.819 { 00:15:52.819 "cntlid": 73, 00:15:52.819 "qid": 0, 00:15:52.819 "state": "enabled", 00:15:52.819 "listen_address": { 00:15:52.819 "trtype": "TCP", 00:15:52.819 "adrfam": "IPv4", 00:15:52.819 "traddr": "10.0.0.2", 00:15:52.819 "trsvcid": "4420" 00:15:52.819 }, 00:15:52.819 "peer_address": { 00:15:52.819 "trtype": "TCP", 00:15:52.819 "adrfam": "IPv4", 00:15:52.819 "traddr": "10.0.0.1", 00:15:52.819 "trsvcid": "43238" 00:15:52.819 }, 00:15:52.819 "auth": { 00:15:52.819 "state": "completed", 00:15:52.819 "digest": "sha384", 00:15:52.819 "dhgroup": "ffdhe4096" 00:15:52.819 } 00:15:52.819 } 00:15:52.819 ]' 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.819 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.078 11:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.011 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.269 11:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.270 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.270 11:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.835 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.835 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.835 { 00:15:54.835 "cntlid": 75, 00:15:54.835 "qid": 0, 00:15:54.835 "state": "enabled", 00:15:54.835 "listen_address": { 00:15:54.835 "trtype": "TCP", 00:15:54.835 "adrfam": "IPv4", 00:15:54.835 "traddr": "10.0.0.2", 00:15:54.835 "trsvcid": "4420" 00:15:54.835 }, 00:15:54.835 "peer_address": { 00:15:54.835 "trtype": "TCP", 00:15:54.835 "adrfam": "IPv4", 00:15:54.835 "traddr": "10.0.0.1", 00:15:54.835 "trsvcid": "43270" 00:15:54.835 }, 00:15:54.835 "auth": { 00:15:54.835 "state": "completed", 00:15:54.835 "digest": "sha384", 00:15:54.835 "dhgroup": "ffdhe4096" 00:15:54.835 } 00:15:54.835 } 00:15:54.835 ]' 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.101 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.366 11:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.296 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.554 11:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.812 00:15:57.070 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.070 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.070 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.329 { 00:15:57.329 "cntlid": 77, 00:15:57.329 "qid": 0, 00:15:57.329 "state": "enabled", 00:15:57.329 "listen_address": { 00:15:57.329 "trtype": "TCP", 00:15:57.329 "adrfam": "IPv4", 00:15:57.329 "traddr": "10.0.0.2", 00:15:57.329 "trsvcid": "4420" 00:15:57.329 }, 00:15:57.329 "peer_address": { 00:15:57.329 "trtype": "TCP", 00:15:57.329 "adrfam": "IPv4", 00:15:57.329 "traddr": "10.0.0.1", 00:15:57.329 "trsvcid": "43286" 00:15:57.329 }, 00:15:57.329 "auth": { 00:15:57.329 "state": "completed", 00:15:57.329 "digest": "sha384", 00:15:57.329 "dhgroup": "ffdhe4096" 00:15:57.329 } 00:15:57.329 } 00:15:57.329 ]' 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.329 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.587 11:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.524 11:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.782 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.039 00:15:59.297 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.297 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.297 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.555 { 00:15:59.555 "cntlid": 79, 00:15:59.555 "qid": 0, 00:15:59.555 "state": "enabled", 00:15:59.555 "listen_address": { 00:15:59.555 "trtype": "TCP", 00:15:59.555 "adrfam": "IPv4", 00:15:59.555 "traddr": "10.0.0.2", 00:15:59.555 "trsvcid": "4420" 00:15:59.555 }, 00:15:59.555 "peer_address": { 00:15:59.555 "trtype": "TCP", 00:15:59.555 "adrfam": "IPv4", 00:15:59.555 "traddr": "10.0.0.1", 00:15:59.555 "trsvcid": "56778" 00:15:59.555 }, 00:15:59.555 "auth": { 00:15:59.555 "state": "completed", 00:15:59.555 "digest": "sha384", 00:15:59.555 "dhgroup": "ffdhe4096" 00:15:59.555 } 00:15:59.555 } 00:15:59.555 ]' 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.555 11:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.812 11:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.812 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.812 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.070 11:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.636 00:16:01.636 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.636 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.636 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.894 { 00:16:01.894 "cntlid": 81, 00:16:01.894 "qid": 0, 00:16:01.894 "state": "enabled", 00:16:01.894 "listen_address": { 00:16:01.894 "trtype": "TCP", 00:16:01.894 "adrfam": "IPv4", 00:16:01.894 "traddr": "10.0.0.2", 00:16:01.894 "trsvcid": "4420" 00:16:01.894 }, 00:16:01.894 "peer_address": { 00:16:01.894 "trtype": "TCP", 00:16:01.894 "adrfam": "IPv4", 00:16:01.894 "traddr": "10.0.0.1", 00:16:01.894 "trsvcid": "56808" 00:16:01.894 }, 00:16:01.894 "auth": { 00:16:01.894 "state": "completed", 00:16:01.894 "digest": "sha384", 00:16:01.894 "dhgroup": "ffdhe6144" 00:16:01.894 } 00:16:01.894 } 00:16:01.894 ]' 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.894 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.152 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.153 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.153 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.411 11:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.344 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.602 11:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.166 00:16:04.166 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.166 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.166 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.423 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.423 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.423 11:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.424 { 00:16:04.424 "cntlid": 83, 00:16:04.424 "qid": 0, 00:16:04.424 "state": "enabled", 00:16:04.424 "listen_address": { 00:16:04.424 "trtype": "TCP", 00:16:04.424 "adrfam": "IPv4", 00:16:04.424 "traddr": "10.0.0.2", 00:16:04.424 "trsvcid": "4420" 00:16:04.424 }, 00:16:04.424 "peer_address": { 00:16:04.424 "trtype": "TCP", 00:16:04.424 "adrfam": "IPv4", 00:16:04.424 "traddr": "10.0.0.1", 00:16:04.424 "trsvcid": "56842" 00:16:04.424 }, 00:16:04.424 "auth": { 00:16:04.424 "state": "completed", 00:16:04.424 "digest": "sha384", 00:16:04.424 
"dhgroup": "ffdhe6144" 00:16:04.424 } 00:16:04.424 } 00:16:04.424 ]' 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.424 11:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.682 11:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:16:05.615 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.615 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.615 11:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.615 11:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.873 11:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.873 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.873 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.873 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.131 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.697 00:16:06.697 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.697 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.697 11:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.697 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.697 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.697 11:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.697 11:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.955 { 00:16:06.955 "cntlid": 85, 00:16:06.955 "qid": 0, 00:16:06.955 "state": "enabled", 00:16:06.955 "listen_address": { 00:16:06.955 "trtype": "TCP", 00:16:06.955 "adrfam": "IPv4", 00:16:06.955 "traddr": "10.0.0.2", 00:16:06.955 "trsvcid": "4420" 00:16:06.955 }, 00:16:06.955 "peer_address": { 00:16:06.955 "trtype": "TCP", 00:16:06.955 "adrfam": "IPv4", 00:16:06.955 "traddr": "10.0.0.1", 00:16:06.955 "trsvcid": "56862" 00:16:06.955 }, 00:16:06.955 "auth": { 00:16:06.955 "state": "completed", 00:16:06.955 "digest": "sha384", 00:16:06.955 "dhgroup": "ffdhe6144" 00:16:06.955 } 00:16:06.955 } 00:16:06.955 ]' 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.955 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.214 11:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.146 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.404 11:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.969 00:16:08.969 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.969 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.969 11:52:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.227 { 00:16:09.227 "cntlid": 87, 00:16:09.227 "qid": 0, 00:16:09.227 "state": "enabled", 00:16:09.227 "listen_address": { 00:16:09.227 "trtype": "TCP", 00:16:09.227 "adrfam": "IPv4", 00:16:09.227 "traddr": "10.0.0.2", 00:16:09.227 "trsvcid": "4420" 00:16:09.227 }, 00:16:09.227 "peer_address": { 00:16:09.227 "trtype": "TCP", 00:16:09.227 "adrfam": "IPv4", 00:16:09.227 "traddr": "10.0.0.1", 00:16:09.227 "trsvcid": "49542" 00:16:09.227 }, 00:16:09.227 "auth": { 00:16:09.227 "state": "completed", 00:16:09.227 "digest": "sha384", 00:16:09.227 "dhgroup": "ffdhe6144" 00:16:09.227 } 00:16:09.227 } 00:16:09.227 ]' 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.227 11:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.792 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.733 11:52:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.733 11:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.733 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:10.733 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.733 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.733 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:10.733 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.734 11:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.668 00:16:11.668 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.668 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.668 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.925 { 00:16:11.925 "cntlid": 89, 00:16:11.925 "qid": 0, 00:16:11.925 "state": "enabled", 00:16:11.925 "listen_address": { 00:16:11.925 "trtype": "TCP", 00:16:11.925 "adrfam": "IPv4", 00:16:11.925 "traddr": "10.0.0.2", 00:16:11.925 
"trsvcid": "4420" 00:16:11.925 }, 00:16:11.925 "peer_address": { 00:16:11.925 "trtype": "TCP", 00:16:11.925 "adrfam": "IPv4", 00:16:11.925 "traddr": "10.0.0.1", 00:16:11.925 "trsvcid": "49584" 00:16:11.925 }, 00:16:11.925 "auth": { 00:16:11.925 "state": "completed", 00:16:11.925 "digest": "sha384", 00:16:11.925 "dhgroup": "ffdhe8192" 00:16:11.925 } 00:16:11.925 } 00:16:11.925 ]' 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.925 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.181 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.182 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.182 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.182 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.182 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.439 11:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.372 11:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.630 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.558 00:16:14.558 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.558 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.558 11:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.815 { 00:16:14.815 "cntlid": 91, 00:16:14.815 "qid": 0, 00:16:14.815 "state": "enabled", 00:16:14.815 "listen_address": { 00:16:14.815 "trtype": "TCP", 00:16:14.815 "adrfam": "IPv4", 00:16:14.815 "traddr": "10.0.0.2", 00:16:14.815 "trsvcid": "4420" 00:16:14.815 }, 00:16:14.815 "peer_address": { 00:16:14.815 "trtype": "TCP", 00:16:14.815 "adrfam": "IPv4", 00:16:14.815 "traddr": "10.0.0.1", 00:16:14.815 "trsvcid": "49602" 00:16:14.815 }, 00:16:14.815 "auth": { 00:16:14.815 "state": "completed", 00:16:14.815 "digest": "sha384", 00:16:14.815 "dhgroup": "ffdhe8192" 00:16:14.815 } 00:16:14.815 } 00:16:14.815 ]' 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.815 11:53:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.815 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.071 11:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:16:16.003 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.261 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.519 11:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.487 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.487 { 00:16:17.487 "cntlid": 93, 00:16:17.487 "qid": 0, 00:16:17.487 "state": "enabled", 00:16:17.487 "listen_address": { 00:16:17.487 "trtype": "TCP", 00:16:17.487 "adrfam": "IPv4", 00:16:17.487 "traddr": "10.0.0.2", 00:16:17.487 "trsvcid": "4420" 00:16:17.487 }, 00:16:17.487 "peer_address": { 00:16:17.487 "trtype": "TCP", 00:16:17.487 "adrfam": "IPv4", 00:16:17.487 "traddr": "10.0.0.1", 00:16:17.487 "trsvcid": "49644" 00:16:17.487 }, 00:16:17.487 "auth": { 00:16:17.487 "state": "completed", 00:16:17.487 "digest": "sha384", 00:16:17.487 "dhgroup": "ffdhe8192" 00:16:17.487 } 00:16:17.487 } 00:16:17.487 ]' 00:16:17.487 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.488 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.488 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.746 11:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.746 11:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.746 11:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.746 11:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.746 11:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.004 11:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.941 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.199 11:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.134 00:16:20.134 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.134 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.134 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.391 11:53:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.391 { 00:16:20.391 "cntlid": 95, 00:16:20.391 "qid": 0, 00:16:20.391 "state": "enabled", 00:16:20.391 "listen_address": { 00:16:20.391 "trtype": "TCP", 00:16:20.391 "adrfam": "IPv4", 00:16:20.391 "traddr": "10.0.0.2", 00:16:20.391 "trsvcid": "4420" 00:16:20.391 }, 00:16:20.391 "peer_address": { 00:16:20.391 "trtype": "TCP", 00:16:20.391 "adrfam": "IPv4", 00:16:20.391 "traddr": "10.0.0.1", 00:16:20.391 "trsvcid": "57966" 00:16:20.391 }, 00:16:20.391 "auth": { 00:16:20.391 "state": "completed", 00:16:20.391 "digest": "sha384", 00:16:20.391 "dhgroup": "ffdhe8192" 00:16:20.391 } 00:16:20.391 } 00:16:20.391 ]' 00:16:20.391 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.392 11:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.650 11:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.584 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.841 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.097 00:16:22.097 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.097 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.097 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.353 { 00:16:22.353 "cntlid": 97, 00:16:22.353 "qid": 0, 00:16:22.353 "state": "enabled", 00:16:22.353 "listen_address": { 00:16:22.353 "trtype": "TCP", 00:16:22.353 "adrfam": "IPv4", 00:16:22.353 "traddr": "10.0.0.2", 00:16:22.353 "trsvcid": "4420" 00:16:22.353 }, 00:16:22.353 "peer_address": { 00:16:22.353 "trtype": "TCP", 00:16:22.353 "adrfam": "IPv4", 00:16:22.353 "traddr": "10.0.0.1", 00:16:22.353 "trsvcid": "57978" 00:16:22.353 }, 00:16:22.353 "auth": { 00:16:22.353 "state": "completed", 00:16:22.353 "digest": "sha512", 00:16:22.353 "dhgroup": "null" 00:16:22.353 } 00:16:22.353 } 00:16:22.353 ]' 00:16:22.353 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.610 11:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.867 11:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.802 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.058 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.620 00:16:24.620 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.620 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.620 11:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.620 { 00:16:24.620 "cntlid": 99, 00:16:24.620 "qid": 0, 00:16:24.620 "state": "enabled", 00:16:24.620 "listen_address": { 00:16:24.620 "trtype": "TCP", 00:16:24.620 "adrfam": "IPv4", 00:16:24.620 "traddr": "10.0.0.2", 00:16:24.620 "trsvcid": "4420" 00:16:24.620 }, 00:16:24.620 "peer_address": { 00:16:24.620 "trtype": "TCP", 00:16:24.620 "adrfam": "IPv4", 00:16:24.620 "traddr": "10.0.0.1", 00:16:24.620 "trsvcid": "58008" 00:16:24.620 }, 00:16:24.620 "auth": { 00:16:24.620 "state": "completed", 00:16:24.620 "digest": "sha512", 00:16:24.620 "dhgroup": "null" 00:16:24.620 } 00:16:24.620 } 00:16:24.620 ]' 00:16:24.620 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.877 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.135 11:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 
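[Editorial sketch] The trace above repeats one DH-HMAC-CHAP round trip per digest/dhgroup/key combination. The shell sketch below is not the test script itself; it is an illustration assembled only from the RPC and nvme-cli calls visible in the trace, taking the sha512 digest and the ffdhe2048 group from the sweep as an example. Assumptions are marked: the target-side RPC socket (the trace only shows the host socket /var/tmp/host.sock explicitly), the key names key0/ckey0 (registered earlier in the run, before this excerpt), and the DHCHAP_KEY0/DHCHAP_CKEY0 variables standing in for the literal DHHC-1 secrets printed in the log.

# --- minimal sketch of one authenticated attach/verify/teardown cycle (assumed paths and key names) ---
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev_nvme application, socket as in the trace
TGTRPC="scripts/rpc.py"                          # target-side nvmf application (default socket assumed)

# 1. Restrict the host-side initiator to one digest/dhgroup pair for this iteration.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with a previously registered key pair (key0/ckey0 assumed registered).
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller with the same keys; this is where the DH-HMAC-CHAP exchange runs.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify on the target that the admin qpair negotiated the expected parameters.
$TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .state, .digest, .dhgroup'
# expected output: completed / sha512 / ffdhe2048

# 5. Tear down, then repeat the check with the kernel initiator using the raw DHHC-1 secrets.
$HOSTRPC bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$DHCHAP_KEY0" --dhchap-ctrl-secret "$DHCHAP_CKEY0"
nvme disconnect -n "$SUBNQN"
$TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
# --- end sketch ---

Note the two-stage check the trace performs: the SPDK userspace initiator authenticates with named keys and the result is read back from the qpair's auth block, while the kernel initiator (nvme connect) is handed the raw DHHC-1 secret strings directly, exercising the same subsystem host entry. The remainder of the log below simply repeats this cycle for the other key indexes and dhgroups.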
00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.071 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.328 11:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.586 00:16:26.843 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.843 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.843 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.101 { 00:16:27.101 "cntlid": 101, 00:16:27.101 "qid": 0, 00:16:27.101 "state": "enabled", 00:16:27.101 "listen_address": { 00:16:27.101 "trtype": "TCP", 00:16:27.101 "adrfam": "IPv4", 00:16:27.101 "traddr": "10.0.0.2", 00:16:27.101 "trsvcid": "4420" 00:16:27.101 }, 00:16:27.101 "peer_address": { 00:16:27.101 "trtype": "TCP", 00:16:27.101 "adrfam": "IPv4", 00:16:27.101 "traddr": "10.0.0.1", 00:16:27.101 "trsvcid": "58048" 00:16:27.101 }, 00:16:27.101 "auth": { 00:16:27.101 "state": "completed", 00:16:27.101 "digest": "sha512", 00:16:27.101 "dhgroup": "null" 00:16:27.101 } 00:16:27.101 } 00:16:27.101 ]' 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.101 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.358 11:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.289 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.546 11:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.110 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.110 11:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.368 { 00:16:29.368 "cntlid": 103, 00:16:29.368 "qid": 0, 00:16:29.368 "state": "enabled", 00:16:29.368 "listen_address": { 00:16:29.368 "trtype": "TCP", 00:16:29.368 "adrfam": "IPv4", 00:16:29.368 "traddr": "10.0.0.2", 00:16:29.368 "trsvcid": "4420" 00:16:29.368 }, 00:16:29.368 "peer_address": { 00:16:29.368 "trtype": "TCP", 00:16:29.368 "adrfam": "IPv4", 00:16:29.368 "traddr": "10.0.0.1", 00:16:29.368 "trsvcid": "44454" 00:16:29.368 }, 00:16:29.368 "auth": { 00:16:29.368 "state": "completed", 00:16:29.368 "digest": "sha512", 00:16:29.368 "dhgroup": "null" 00:16:29.368 } 00:16:29.368 } 00:16:29.368 ]' 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.368 11:53:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.368 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.626 11:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.559 11:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.816 11:53:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.816 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.074 00:16:31.074 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.074 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.074 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.331 { 00:16:31.331 "cntlid": 105, 00:16:31.331 "qid": 0, 00:16:31.331 "state": "enabled", 00:16:31.331 "listen_address": { 00:16:31.331 "trtype": "TCP", 00:16:31.331 "adrfam": "IPv4", 00:16:31.331 "traddr": "10.0.0.2", 00:16:31.331 "trsvcid": "4420" 00:16:31.331 }, 00:16:31.331 "peer_address": { 00:16:31.331 "trtype": "TCP", 00:16:31.331 "adrfam": "IPv4", 00:16:31.331 "traddr": "10.0.0.1", 00:16:31.331 "trsvcid": "44478" 00:16:31.331 }, 00:16:31.331 "auth": { 00:16:31.331 "state": "completed", 00:16:31.331 "digest": "sha512", 00:16:31.331 "dhgroup": "ffdhe2048" 00:16:31.331 } 00:16:31.331 } 00:16:31.331 ]' 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.331 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.588 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.588 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.589 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.589 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.589 11:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.846 11:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.779 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.037 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.324 00:16:33.324 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.324 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.324 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.582 { 00:16:33.582 "cntlid": 107, 00:16:33.582 "qid": 0, 00:16:33.582 "state": "enabled", 00:16:33.582 "listen_address": { 00:16:33.582 "trtype": "TCP", 00:16:33.582 "adrfam": "IPv4", 00:16:33.582 "traddr": "10.0.0.2", 00:16:33.582 "trsvcid": "4420" 00:16:33.582 }, 00:16:33.582 "peer_address": { 00:16:33.582 "trtype": "TCP", 00:16:33.582 "adrfam": "IPv4", 00:16:33.582 "traddr": "10.0.0.1", 00:16:33.582 "trsvcid": "44518" 00:16:33.582 }, 00:16:33.582 "auth": { 00:16:33.582 "state": "completed", 00:16:33.582 "digest": "sha512", 00:16:33.582 "dhgroup": "ffdhe2048" 00:16:33.582 } 00:16:33.582 } 00:16:33.582 ]' 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.582 11:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.582 11:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.582 11:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.582 11:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.838 11:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.770 11:53:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.770 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.028 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.286 00:16:35.286 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.286 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.286 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.544 { 00:16:35.544 "cntlid": 109, 00:16:35.544 "qid": 0, 00:16:35.544 "state": "enabled", 00:16:35.544 "listen_address": { 00:16:35.544 "trtype": "TCP", 00:16:35.544 "adrfam": "IPv4", 00:16:35.544 "traddr": "10.0.0.2", 00:16:35.544 "trsvcid": "4420" 00:16:35.544 }, 00:16:35.544 "peer_address": { 00:16:35.544 "trtype": "TCP", 00:16:35.544 
"adrfam": "IPv4", 00:16:35.544 "traddr": "10.0.0.1", 00:16:35.544 "trsvcid": "44550" 00:16:35.544 }, 00:16:35.544 "auth": { 00:16:35.544 "state": "completed", 00:16:35.544 "digest": "sha512", 00:16:35.544 "dhgroup": "ffdhe2048" 00:16:35.544 } 00:16:35.544 } 00:16:35.544 ]' 00:16:35.544 11:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.802 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.060 11:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.993 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.251 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.508 00:16:37.508 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.508 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.508 11:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.766 { 00:16:37.766 "cntlid": 111, 00:16:37.766 "qid": 0, 00:16:37.766 "state": "enabled", 00:16:37.766 "listen_address": { 00:16:37.766 "trtype": "TCP", 00:16:37.766 "adrfam": "IPv4", 00:16:37.766 "traddr": "10.0.0.2", 00:16:37.766 "trsvcid": "4420" 00:16:37.766 }, 00:16:37.766 "peer_address": { 00:16:37.766 "trtype": "TCP", 00:16:37.766 "adrfam": "IPv4", 00:16:37.766 "traddr": "10.0.0.1", 00:16:37.766 "trsvcid": "44570" 00:16:37.766 }, 00:16:37.766 "auth": { 00:16:37.766 "state": "completed", 00:16:37.766 "digest": "sha512", 00:16:37.766 "dhgroup": "ffdhe2048" 00:16:37.766 } 00:16:37.766 } 00:16:37.766 ]' 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.766 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.024 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.024 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.024 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.024 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.024 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.281 11:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.215 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.472 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:39.472 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.472 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.472 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:39.472 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.473 11:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:16:39.730 00:16:39.730 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.730 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.730 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.988 { 00:16:39.988 "cntlid": 113, 00:16:39.988 "qid": 0, 00:16:39.988 "state": "enabled", 00:16:39.988 "listen_address": { 00:16:39.988 "trtype": "TCP", 00:16:39.988 "adrfam": "IPv4", 00:16:39.988 "traddr": "10.0.0.2", 00:16:39.988 "trsvcid": "4420" 00:16:39.988 }, 00:16:39.988 "peer_address": { 00:16:39.988 "trtype": "TCP", 00:16:39.988 "adrfam": "IPv4", 00:16:39.988 "traddr": "10.0.0.1", 00:16:39.988 "trsvcid": "36544" 00:16:39.988 }, 00:16:39.988 "auth": { 00:16:39.988 "state": "completed", 00:16:39.988 "digest": "sha512", 00:16:39.988 "dhgroup": "ffdhe3072" 00:16:39.988 } 00:16:39.988 } 00:16:39.988 ]' 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.988 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.246 11:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.178 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.746 11:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.002 00:16:42.002 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.002 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.002 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.260 { 00:16:42.260 
"cntlid": 115, 00:16:42.260 "qid": 0, 00:16:42.260 "state": "enabled", 00:16:42.260 "listen_address": { 00:16:42.260 "trtype": "TCP", 00:16:42.260 "adrfam": "IPv4", 00:16:42.260 "traddr": "10.0.0.2", 00:16:42.260 "trsvcid": "4420" 00:16:42.260 }, 00:16:42.260 "peer_address": { 00:16:42.260 "trtype": "TCP", 00:16:42.260 "adrfam": "IPv4", 00:16:42.260 "traddr": "10.0.0.1", 00:16:42.260 "trsvcid": "36570" 00:16:42.260 }, 00:16:42.260 "auth": { 00:16:42.260 "state": "completed", 00:16:42.260 "digest": "sha512", 00:16:42.260 "dhgroup": "ffdhe3072" 00:16:42.260 } 00:16:42.260 } 00:16:42.260 ]' 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.260 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.517 11:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.449 11:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.707 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.270 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.270 { 00:16:44.270 "cntlid": 117, 00:16:44.270 "qid": 0, 00:16:44.270 "state": "enabled", 00:16:44.270 "listen_address": { 00:16:44.270 "trtype": "TCP", 00:16:44.270 "adrfam": "IPv4", 00:16:44.270 "traddr": "10.0.0.2", 00:16:44.270 "trsvcid": "4420" 00:16:44.270 }, 00:16:44.270 "peer_address": { 00:16:44.270 "trtype": "TCP", 00:16:44.270 "adrfam": "IPv4", 00:16:44.270 "traddr": "10.0.0.1", 00:16:44.270 "trsvcid": "36590" 00:16:44.270 }, 00:16:44.270 "auth": { 00:16:44.270 "state": "completed", 00:16:44.270 "digest": "sha512", 00:16:44.270 "dhgroup": "ffdhe3072" 00:16:44.270 } 00:16:44.270 } 00:16:44.270 ]' 00:16:44.270 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.527 11:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.785 11:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.717 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.975 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.233 00:16:46.233 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.233 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.233 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.490 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.490 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.490 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.491 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.491 11:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.491 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.491 { 00:16:46.491 "cntlid": 119, 00:16:46.491 "qid": 0, 00:16:46.491 "state": "enabled", 00:16:46.491 "listen_address": { 00:16:46.491 "trtype": "TCP", 00:16:46.491 "adrfam": "IPv4", 00:16:46.491 "traddr": "10.0.0.2", 00:16:46.491 "trsvcid": "4420" 00:16:46.491 }, 00:16:46.491 "peer_address": { 00:16:46.491 "trtype": "TCP", 00:16:46.491 "adrfam": "IPv4", 00:16:46.491 "traddr": "10.0.0.1", 00:16:46.491 "trsvcid": "36626" 00:16:46.491 }, 00:16:46.491 "auth": { 00:16:46.491 "state": "completed", 00:16:46.491 "digest": "sha512", 00:16:46.491 "dhgroup": "ffdhe3072" 00:16:46.491 } 00:16:46.491 } 00:16:46.491 ]' 00:16:46.491 11:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.748 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.006 11:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.939 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.197 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.455 00:16:48.455 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.455 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.455 11:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.714 11:53:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.714 { 00:16:48.714 "cntlid": 121, 00:16:48.714 "qid": 0, 00:16:48.714 "state": "enabled", 00:16:48.714 "listen_address": { 00:16:48.714 "trtype": "TCP", 00:16:48.714 "adrfam": "IPv4", 00:16:48.714 "traddr": "10.0.0.2", 00:16:48.714 "trsvcid": "4420" 00:16:48.714 }, 00:16:48.714 "peer_address": { 00:16:48.714 "trtype": "TCP", 00:16:48.714 "adrfam": "IPv4", 00:16:48.714 "traddr": "10.0.0.1", 00:16:48.714 "trsvcid": "45212" 00:16:48.714 }, 00:16:48.714 "auth": { 00:16:48.714 "state": "completed", 00:16:48.714 "digest": "sha512", 00:16:48.714 "dhgroup": "ffdhe4096" 00:16:48.714 } 00:16:48.714 } 00:16:48.714 ]' 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.714 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.972 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.972 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.972 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.972 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.972 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.231 11:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.200 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.457 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.458 11:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.023 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.023 { 00:16:51.023 "cntlid": 123, 00:16:51.023 "qid": 0, 00:16:51.023 "state": "enabled", 00:16:51.023 "listen_address": { 00:16:51.023 "trtype": "TCP", 00:16:51.023 "adrfam": "IPv4", 00:16:51.023 "traddr": "10.0.0.2", 00:16:51.023 "trsvcid": "4420" 00:16:51.023 }, 00:16:51.023 "peer_address": { 00:16:51.023 "trtype": "TCP", 00:16:51.023 "adrfam": "IPv4", 00:16:51.023 "traddr": "10.0.0.1", 00:16:51.023 "trsvcid": "45234" 00:16:51.023 }, 00:16:51.023 "auth": { 00:16:51.023 "state": "completed", 00:16:51.023 "digest": "sha512", 00:16:51.023 "dhgroup": "ffdhe4096" 00:16:51.023 } 00:16:51.023 } 00:16:51.023 ]' 00:16:51.023 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.281 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.539 11:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.471 11:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.728 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:52.728 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.728 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.728 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.728 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:52.729 
11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.729 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.986 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.244 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.502 { 00:16:53.502 "cntlid": 125, 00:16:53.502 "qid": 0, 00:16:53.502 "state": "enabled", 00:16:53.502 "listen_address": { 00:16:53.502 "trtype": "TCP", 00:16:53.502 "adrfam": "IPv4", 00:16:53.502 "traddr": "10.0.0.2", 00:16:53.502 "trsvcid": "4420" 00:16:53.502 }, 00:16:53.502 "peer_address": { 00:16:53.502 "trtype": "TCP", 00:16:53.502 "adrfam": "IPv4", 00:16:53.502 "traddr": "10.0.0.1", 00:16:53.502 "trsvcid": "45272" 00:16:53.502 }, 00:16:53.502 "auth": { 00:16:53.502 "state": "completed", 00:16:53.502 "digest": "sha512", 00:16:53.502 "dhgroup": "ffdhe4096" 00:16:53.502 } 00:16:53.502 } 00:16:53.502 ]' 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.502 11:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.759 11:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.687 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.943 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.200 00:16:55.200 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.200 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.200 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.458 { 00:16:55.458 "cntlid": 127, 00:16:55.458 "qid": 0, 00:16:55.458 "state": "enabled", 00:16:55.458 "listen_address": { 00:16:55.458 "trtype": "TCP", 00:16:55.458 "adrfam": "IPv4", 00:16:55.458 "traddr": "10.0.0.2", 00:16:55.458 "trsvcid": "4420" 00:16:55.458 }, 00:16:55.458 "peer_address": { 00:16:55.458 "trtype": "TCP", 00:16:55.458 "adrfam": "IPv4", 00:16:55.458 "traddr": "10.0.0.1", 00:16:55.458 "trsvcid": "45294" 00:16:55.458 }, 00:16:55.458 "auth": { 00:16:55.458 "state": "completed", 00:16:55.458 "digest": "sha512", 00:16:55.458 "dhgroup": "ffdhe4096" 00:16:55.458 } 00:16:55.458 } 00:16:55.458 ]' 00:16:55.458 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.716 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.716 11:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.716 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.716 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.716 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.716 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.716 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.973 11:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:16:56.905 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.905 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
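For reference, each connect_authenticate cycle traced above reduces to a short host/target RPC sequence. A minimal sketch of the pass being set up here (sha512 / ffdhe6144 / key2), assuming the same subsystem and host NQNs as in this run, that key2/ckey2 name DH-HMAC-CHAP keys already loaded on the target, and that target-side RPCs go to the default socket (the variable names below are illustrative, not part of target/auth.sh):

  #!/usr/bin/env bash
  # Sketch of one authentication cycle as the trace shows it; not the verbatim test script.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict the initiator to a single digest/dhgroup pair for this pass.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host NQN with a DH-HMAC-CHAP key and a controller (bidirectional) key.
  # (Target RPCs here use the default socket; the run's rpc_cmd helper may point elsewhere.)
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach the controller; authentication runs during the fabrics CONNECT.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: confirm the negotiated parameters on the resulting qpair, then tear down.
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .digest, .dhgroup, .state'
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

The trace then repeats the same handshake through the kernel initiator (nvme connect with --dhchap-secret / --dhchap-ctrl-secret, followed by nvme disconnect) before removing the host and moving to the next key.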
00:16:56.906 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.163 11:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.728 00:16:57.728 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.728 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.728 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.985 { 00:16:57.985 "cntlid": 129, 00:16:57.985 "qid": 0, 00:16:57.985 "state": "enabled", 00:16:57.985 "listen_address": { 00:16:57.985 "trtype": "TCP", 00:16:57.985 "adrfam": "IPv4", 00:16:57.985 "traddr": "10.0.0.2", 00:16:57.985 "trsvcid": "4420" 00:16:57.985 }, 00:16:57.985 "peer_address": { 00:16:57.985 "trtype": "TCP", 00:16:57.985 "adrfam": "IPv4", 00:16:57.985 "traddr": "10.0.0.1", 00:16:57.985 "trsvcid": "45306" 00:16:57.985 }, 00:16:57.985 "auth": { 
00:16:57.985 "state": "completed", 00:16:57.985 "digest": "sha512", 00:16:57.985 "dhgroup": "ffdhe6144" 00:16:57.985 } 00:16:57.985 } 00:16:57.985 ]' 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.985 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.242 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.242 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.242 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.242 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.242 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.499 11:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.430 11:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.687 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.250 00:17:00.250 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.250 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.250 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.506 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.506 { 00:17:00.506 "cntlid": 131, 00:17:00.506 "qid": 0, 00:17:00.506 "state": "enabled", 00:17:00.506 "listen_address": { 00:17:00.506 "trtype": "TCP", 00:17:00.506 "adrfam": "IPv4", 00:17:00.506 "traddr": "10.0.0.2", 00:17:00.506 "trsvcid": "4420" 00:17:00.506 }, 00:17:00.506 "peer_address": { 00:17:00.506 "trtype": "TCP", 00:17:00.506 "adrfam": "IPv4", 00:17:00.506 "traddr": "10.0.0.1", 00:17:00.506 "trsvcid": "40110" 00:17:00.506 }, 00:17:00.506 "auth": { 00:17:00.506 "state": "completed", 00:17:00.506 "digest": "sha512", 00:17:00.506 "dhgroup": "ffdhe6144" 00:17:00.507 } 00:17:00.507 } 00:17:00.507 ]' 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.507 11:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.764 11:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:01.694 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.953 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:02.518 00:17:02.518 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.518 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.518 11:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.776 { 00:17:02.776 "cntlid": 133, 00:17:02.776 "qid": 0, 00:17:02.776 "state": "enabled", 00:17:02.776 "listen_address": { 00:17:02.776 "trtype": "TCP", 00:17:02.776 "adrfam": "IPv4", 00:17:02.776 "traddr": "10.0.0.2", 00:17:02.776 "trsvcid": "4420" 00:17:02.776 }, 00:17:02.776 "peer_address": { 00:17:02.776 "trtype": "TCP", 00:17:02.776 "adrfam": "IPv4", 00:17:02.776 "traddr": "10.0.0.1", 00:17:02.776 "trsvcid": "40142" 00:17:02.776 }, 00:17:02.776 "auth": { 00:17:02.776 "state": "completed", 00:17:02.776 "digest": "sha512", 00:17:02.776 "dhgroup": "ffdhe6144" 00:17:02.776 } 00:17:02.776 } 00:17:02.776 ]' 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.776 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.033 11:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:17:04.403 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.403 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.403 11:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.403 11:53:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.403 11:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.403 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.404 11:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.967 00:17:04.967 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.967 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.967 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.224 { 00:17:05.224 "cntlid": 135, 00:17:05.224 "qid": 0, 00:17:05.224 "state": "enabled", 00:17:05.224 "listen_address": { 
00:17:05.224 "trtype": "TCP", 00:17:05.224 "adrfam": "IPv4", 00:17:05.224 "traddr": "10.0.0.2", 00:17:05.224 "trsvcid": "4420" 00:17:05.224 }, 00:17:05.224 "peer_address": { 00:17:05.224 "trtype": "TCP", 00:17:05.224 "adrfam": "IPv4", 00:17:05.224 "traddr": "10.0.0.1", 00:17:05.224 "trsvcid": "40180" 00:17:05.224 }, 00:17:05.224 "auth": { 00:17:05.224 "state": "completed", 00:17:05.224 "digest": "sha512", 00:17:05.224 "dhgroup": "ffdhe6144" 00:17:05.224 } 00:17:05.224 } 00:17:05.224 ]' 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.224 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.481 11:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:17:06.454 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.454 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.454 11:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.454 11:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.455 11:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.455 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.455 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.455 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.455 11:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.712 11:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.646 00:17:07.646 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.646 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.646 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.904 { 00:17:07.904 "cntlid": 137, 00:17:07.904 "qid": 0, 00:17:07.904 "state": "enabled", 00:17:07.904 "listen_address": { 00:17:07.904 "trtype": "TCP", 00:17:07.904 "adrfam": "IPv4", 00:17:07.904 "traddr": "10.0.0.2", 00:17:07.904 "trsvcid": "4420" 00:17:07.904 }, 00:17:07.904 "peer_address": { 00:17:07.904 "trtype": "TCP", 00:17:07.904 "adrfam": "IPv4", 00:17:07.904 "traddr": "10.0.0.1", 00:17:07.904 "trsvcid": "40210" 00:17:07.904 }, 00:17:07.904 "auth": { 00:17:07.904 "state": "completed", 00:17:07.904 "digest": "sha512", 00:17:07.904 "dhgroup": "ffdhe8192" 00:17:07.904 } 00:17:07.904 } 00:17:07.904 ]' 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.904 11:53:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.904 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.162 11:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:17:09.096 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.096 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.096 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.096 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.357 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.357 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.357 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.358 11:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.358 11:53:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.292 00:17:10.292 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.292 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.292 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.550 { 00:17:10.550 "cntlid": 139, 00:17:10.550 "qid": 0, 00:17:10.550 "state": "enabled", 00:17:10.550 "listen_address": { 00:17:10.550 "trtype": "TCP", 00:17:10.550 "adrfam": "IPv4", 00:17:10.550 "traddr": "10.0.0.2", 00:17:10.550 "trsvcid": "4420" 00:17:10.550 }, 00:17:10.550 "peer_address": { 00:17:10.550 "trtype": "TCP", 00:17:10.550 "adrfam": "IPv4", 00:17:10.550 "traddr": "10.0.0.1", 00:17:10.550 "trsvcid": "37754" 00:17:10.550 }, 00:17:10.550 "auth": { 00:17:10.550 "state": "completed", 00:17:10.550 "digest": "sha512", 00:17:10.550 "dhgroup": "ffdhe8192" 00:17:10.550 } 00:17:10.550 } 00:17:10.550 ]' 00:17:10.550 11:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.550 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.550 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.550 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.550 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.808 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.808 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.808 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.066 11:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTBiOTYxOTJjOGNmODQyNDBmZmJhM2QwZDBmYzgxNmVXEdel: --dhchap-ctrl-secret DHHC-1:02:OTI4MDgzOTk2MzQxZDk2YTQ5NmM2NGJhYzk4M2EzNjU0NjM2MDEyMDE3NjkzYWMxLX8ofw==: 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.999 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.000 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.258 11:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.192 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.192 { 00:17:13.192 "cntlid": 141, 00:17:13.192 "qid": 0, 00:17:13.192 "state": "enabled", 00:17:13.192 "listen_address": { 00:17:13.192 "trtype": "TCP", 00:17:13.192 "adrfam": "IPv4", 00:17:13.192 "traddr": "10.0.0.2", 00:17:13.192 "trsvcid": "4420" 00:17:13.192 }, 00:17:13.192 "peer_address": { 00:17:13.192 "trtype": "TCP", 00:17:13.192 "adrfam": "IPv4", 00:17:13.192 "traddr": "10.0.0.1", 00:17:13.192 "trsvcid": "37794" 00:17:13.192 }, 00:17:13.192 "auth": { 00:17:13.192 "state": "completed", 00:17:13.192 "digest": "sha512", 00:17:13.192 "dhgroup": "ffdhe8192" 00:17:13.192 } 00:17:13.192 } 00:17:13.192 ]' 00:17:13.192 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.450 11:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.711 11:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTYwMmI4MDQ0ODUyYTFiNGFiMzQzMzRmNzFiNzAxMWQ4YjY5NzQ5NjI0YzIwMjFha6CFAQ==: --dhchap-ctrl-secret DHHC-1:01:NjgzMTA4ZmNjNzkyNjFiMTFkN2FiYzBmMTE1YmFkM2ZcOO9m: 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.643 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.901 11:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.835 00:17:15.835 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.835 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.835 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.092 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.092 { 00:17:16.092 "cntlid": 143, 00:17:16.092 "qid": 0, 00:17:16.092 "state": "enabled", 00:17:16.092 "listen_address": { 00:17:16.092 "trtype": "TCP", 00:17:16.092 "adrfam": "IPv4", 00:17:16.092 "traddr": "10.0.0.2", 00:17:16.092 "trsvcid": "4420" 00:17:16.092 }, 00:17:16.092 "peer_address": { 00:17:16.092 "trtype": "TCP", 00:17:16.092 "adrfam": "IPv4", 00:17:16.092 "traddr": "10.0.0.1", 00:17:16.092 "trsvcid": "37828" 00:17:16.092 }, 00:17:16.092 "auth": { 00:17:16.092 "state": "completed", 00:17:16.092 "digest": "sha512", 00:17:16.092 "dhgroup": "ffdhe8192" 00:17:16.092 } 00:17:16.093 } 00:17:16.093 ]' 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.093 11:54:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.093 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.350 11:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.284 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
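Worth noting at this step: the host options are rebuilt with every supported digest and DH group allowed at once, and the comma-joined argument strings are produced with the IFS=, / printf %s idiom visible in the trace. A minimal stand-alone sketch of that configuration step, assuming the same host RPC socket (the array and variable names are illustrative):

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # Join each array with commas, as target/auth.sh does via IFS=, and printf %s.
  d=$(IFS=,; printf %s "${digests[*]}")
  g=$(IFS=,; printf %s "${dhgroups[*]}")
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests "$d" --dhchap-dhgroups "$g"
  # With everything permitted on the host, the qpair dump that follows in the trace
  # reports sha512 with ffdhe8192 for this pass.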
00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.542 11:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.476 00:17:18.476 11:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.476 11:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.476 11:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.733 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.734 { 00:17:18.734 "cntlid": 145, 00:17:18.734 "qid": 0, 00:17:18.734 "state": "enabled", 00:17:18.734 "listen_address": { 00:17:18.734 "trtype": "TCP", 00:17:18.734 "adrfam": "IPv4", 00:17:18.734 "traddr": "10.0.0.2", 00:17:18.734 "trsvcid": "4420" 00:17:18.734 }, 00:17:18.734 "peer_address": { 00:17:18.734 "trtype": "TCP", 00:17:18.734 "adrfam": "IPv4", 00:17:18.734 "traddr": "10.0.0.1", 00:17:18.734 "trsvcid": "37860" 00:17:18.734 }, 00:17:18.734 "auth": { 00:17:18.734 "state": "completed", 00:17:18.734 "digest": "sha512", 00:17:18.734 "dhgroup": "ffdhe8192" 00:17:18.734 } 00:17:18.734 } 00:17:18.734 ]' 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.734 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.991 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.991 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.991 11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.249 
11:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjY3Mjc4NjVhMmE2MjVkNzZkN2VhNGM2ZWU3OWU5ODMwNTE5MjIzNjAyYWMyYmM35MgCGQ==: --dhchap-ctrl-secret DHHC-1:03:MGU0Mjg1OWQ3ODQ0NmMxY2MxNWFkODljZjE1NTYxN2M0MGE1ZDAwMTEzM2RlMDJlYmJkMzgwN2E0NDQ4OGRkZbpRGgA=: 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.182 11:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:21.116 request: 00:17:21.116 { 00:17:21.116 "name": "nvme0", 00:17:21.116 "trtype": "tcp", 00:17:21.116 "traddr": 
"10.0.0.2", 00:17:21.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.116 "adrfam": "ipv4", 00:17:21.116 "trsvcid": "4420", 00:17:21.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.116 "dhchap_key": "key2", 00:17:21.116 "method": "bdev_nvme_attach_controller", 00:17:21.116 "req_id": 1 00:17:21.116 } 00:17:21.116 Got JSON-RPC error response 00:17:21.116 response: 00:17:21.116 { 00:17:21.116 "code": -5, 00:17:21.116 "message": "Input/output error" 00:17:21.116 } 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.116 11:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.682 request: 00:17:21.682 { 00:17:21.682 "name": "nvme0", 00:17:21.682 "trtype": "tcp", 00:17:21.682 "traddr": "10.0.0.2", 00:17:21.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.682 "adrfam": "ipv4", 00:17:21.682 "trsvcid": "4420", 00:17:21.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.682 "dhchap_key": "key1", 00:17:21.682 "dhchap_ctrlr_key": "ckey2", 00:17:21.682 "method": "bdev_nvme_attach_controller", 00:17:21.682 "req_id": 1 00:17:21.682 } 00:17:21.682 Got JSON-RPC error response 00:17:21.682 response: 00:17:21.682 { 00:17:21.682 "code": -5, 00:17:21.682 "message": "Input/output error" 00:17:21.682 } 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.682 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:21.683 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:21.683 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:21.683 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:21.683 11:54:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.683 11:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.615 request: 00:17:22.615 { 00:17:22.615 "name": "nvme0", 00:17:22.615 "trtype": "tcp", 00:17:22.615 "traddr": "10.0.0.2", 00:17:22.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:22.615 "adrfam": "ipv4", 00:17:22.615 "trsvcid": "4420", 00:17:22.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.615 "dhchap_key": "key1", 00:17:22.615 "dhchap_ctrlr_key": "ckey1", 00:17:22.615 "method": "bdev_nvme_attach_controller", 00:17:22.615 "req_id": 1 00:17:22.615 } 00:17:22.615 Got JSON-RPC error response 00:17:22.615 response: 00:17:22.615 { 00:17:22.615 "code": -5, 00:17:22.615 "message": "Input/output error" 00:17:22.615 } 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.615 11:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 916500 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 916500 ']' 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 916500 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 916500 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 916500' 00:17:22.615 killing process with pid 916500 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 916500 00:17:22.615 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 916500 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:22.887 11:54:12 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=938916 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 938916 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 938916 ']' 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:22.887 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 938916 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 938916 ']' 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
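The restart just above is the usual nvmfappstart pattern: relaunch nvmf_tgt inside the test namespace with the nvmf_auth log flag enabled and subsystem initialization deferred, then poll its RPC socket before the script sends its setup RPCs. Reduced to plain shell (the rpc_get_methods probe is only a stand-in for the script's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # -L nvmf_auth turns on the auth log flag, --wait-for-rpc defers init until told
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Wait until the default RPC socket answers
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done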
00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:23.171 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.437 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:23.437 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:23.438 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:23.438 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.438 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.696 11:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.627 00:17:24.627 11:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.627 11:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.627 11:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.884 { 00:17:24.884 
"cntlid": 1, 00:17:24.884 "qid": 0, 00:17:24.884 "state": "enabled", 00:17:24.884 "listen_address": { 00:17:24.884 "trtype": "TCP", 00:17:24.884 "adrfam": "IPv4", 00:17:24.884 "traddr": "10.0.0.2", 00:17:24.884 "trsvcid": "4420" 00:17:24.884 }, 00:17:24.884 "peer_address": { 00:17:24.884 "trtype": "TCP", 00:17:24.884 "adrfam": "IPv4", 00:17:24.884 "traddr": "10.0.0.1", 00:17:24.884 "trsvcid": "35832" 00:17:24.884 }, 00:17:24.884 "auth": { 00:17:24.884 "state": "completed", 00:17:24.884 "digest": "sha512", 00:17:24.884 "dhgroup": "ffdhe8192" 00:17:24.884 } 00:17:24.884 } 00:17:24.884 ]' 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.884 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.885 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.885 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.885 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.885 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.885 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.142 11:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzM5ZTc5MjhmMjMxMWExNGFlMDlhNjQzMDQwM2VmNzU3MzRjNzdlMjU1ZjA5ZWVkNTdiYTA1NGUzOWYzZDFkZiQZdI8=: 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:26.076 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.642 11:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.642 request: 00:17:26.642 { 00:17:26.642 "name": "nvme0", 00:17:26.642 "trtype": "tcp", 00:17:26.642 "traddr": "10.0.0.2", 00:17:26.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:26.642 "adrfam": "ipv4", 00:17:26.642 "trsvcid": "4420", 00:17:26.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:26.642 "dhchap_key": "key3", 00:17:26.642 "method": "bdev_nvme_attach_controller", 00:17:26.642 "req_id": 1 00:17:26.642 } 00:17:26.642 Got JSON-RPC error response 00:17:26.642 response: 00:17:26.642 { 00:17:26.642 "code": -5, 00:17:26.642 "message": "Input/output error" 00:17:26.642 } 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.901 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.159 request: 00:17:27.159 { 00:17:27.159 "name": "nvme0", 00:17:27.159 "trtype": "tcp", 00:17:27.159 "traddr": "10.0.0.2", 00:17:27.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:27.159 "adrfam": "ipv4", 00:17:27.159 "trsvcid": "4420", 00:17:27.159 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.159 "dhchap_key": "key3", 00:17:27.159 "method": "bdev_nvme_attach_controller", 00:17:27.159 "req_id": 1 00:17:27.159 } 00:17:27.159 Got JSON-RPC error response 00:17:27.159 response: 00:17:27.159 { 00:17:27.159 "code": -5, 00:17:27.159 "message": "Input/output error" 00:17:27.159 } 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.159 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.723 11:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.981 request: 00:17:27.981 { 00:17:27.981 "name": "nvme0", 00:17:27.981 "trtype": "tcp", 00:17:27.981 "traddr": "10.0.0.2", 00:17:27.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:27.981 "adrfam": "ipv4", 00:17:27.981 "trsvcid": "4420", 00:17:27.981 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.981 "dhchap_key": "key0", 00:17:27.981 "dhchap_ctrlr_key": "key1", 00:17:27.981 "method": "bdev_nvme_attach_controller", 00:17:27.981 "req_id": 1 00:17:27.981 } 00:17:27.981 Got JSON-RPC error response 00:17:27.981 response: 00:17:27.981 { 00:17:27.981 "code": -5, 00:17:27.981 "message": "Input/output error" 00:17:27.981 } 00:17:27.981 11:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:27.981 11:54:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:27.981 11:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:27.981 11:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:27.981 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:27.981 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:28.238 00:17:28.238 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:28.238 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:28.238 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.496 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.496 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.496 11:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 916635 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 916635 ']' 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 916635 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 916635 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 916635' 00:17:28.754 killing process with pid 916635 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 916635 00:17:28.754 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 916635 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:29.319 
11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.319 rmmod nvme_tcp 00:17:29.319 rmmod nvme_fabrics 00:17:29.319 rmmod nvme_keyring 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 938916 ']' 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 938916 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 938916 ']' 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 938916 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 938916 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 938916' 00:17:29.319 killing process with pid 938916 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 938916 00:17:29.319 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 938916 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.577 11:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.478 11:54:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.478 11:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Th9 /tmp/spdk.key-sha256.f6n /tmp/spdk.key-sha384.wfc /tmp/spdk.key-sha512.wj3 /tmp/spdk.key-sha512.6MK /tmp/spdk.key-sha384.Klr /tmp/spdk.key-sha256.662 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:31.478 00:17:31.478 real 3m7.618s 00:17:31.478 user 7m16.892s 00:17:31.478 sys 0m24.995s 00:17:31.478 11:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:31.478 11:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.479 ************************************ 00:17:31.479 END TEST nvmf_auth_target 
00:17:31.479 ************************************ 00:17:31.737 11:54:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:31.737 11:54:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.737 11:54:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:17:31.737 11:54:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:31.737 11:54:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 ************************************ 00:17:31.737 START TEST nvmf_bdevio_no_huge 00:17:31.737 ************************************ 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.737 * Looking for test storage... 00:17:31.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.737 
11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.737 11:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:33.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:33.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.657 11:54:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:33.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:33.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.657 11:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.658 
11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:17:33.658 00:17:33.658 --- 10.0.0.2 ping statistics --- 00:17:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.658 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:33.658 00:17:33.658 --- 10.0.0.1 ping statistics --- 00:17:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.658 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=941566 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 941566 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 941566 ']' 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 
-- # local max_retries=100 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:33.658 11:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.915 [2024-07-12 11:54:23.162455] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:17:33.915 [2024-07-12 11:54:23.162545] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:33.915 [2024-07-12 11:54:23.238516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.915 [2024-07-12 11:54:23.361725] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.915 [2024-07-12 11:54:23.361790] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.915 [2024-07-12 11:54:23.361806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.915 [2024-07-12 11:54:23.361819] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.915 [2024-07-12 11:54:23.361831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.915 [2024-07-12 11:54:23.361949] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:17:33.916 [2024-07-12 11:54:23.362003] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:17:33.916 [2024-07-12 11:54:23.362057] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:17:33.916 [2024-07-12 11:54:23.362061] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 [2024-07-12 11:54:24.112718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.847 11:54:24 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 Malloc0 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.847 [2024-07-12 11:54:24.150418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.847 { 00:17:34.847 "params": { 00:17:34.847 "name": "Nvme$subsystem", 00:17:34.847 "trtype": "$TEST_TRANSPORT", 00:17:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.847 "adrfam": "ipv4", 00:17:34.847 "trsvcid": "$NVMF_PORT", 00:17:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.847 "hdgst": ${hdgst:-false}, 00:17:34.847 "ddgst": ${ddgst:-false} 00:17:34.847 }, 00:17:34.847 "method": "bdev_nvme_attach_controller" 00:17:34.847 } 00:17:34.847 EOF 00:17:34.847 )") 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:34.847 11:54:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.847 "params": { 00:17:34.847 "name": "Nvme1", 00:17:34.847 "trtype": "tcp", 00:17:34.847 "traddr": "10.0.0.2", 00:17:34.847 "adrfam": "ipv4", 00:17:34.847 "trsvcid": "4420", 00:17:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.847 "hdgst": false, 00:17:34.847 "ddgst": false 00:17:34.847 }, 00:17:34.847 "method": "bdev_nvme_attach_controller" 00:17:34.847 }' 00:17:34.847 [2024-07-12 11:54:24.194457] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:17:34.847 [2024-07-12 11:54:24.194549] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid941720 ] 00:17:34.847 [2024-07-12 11:54:24.259417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.105 [2024-07-12 11:54:24.375428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.105 [2024-07-12 11:54:24.375478] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.105 [2024-07-12 11:54:24.375481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.362 I/O targets: 00:17:35.362 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:35.362 00:17:35.362 00:17:35.362 CUnit - A unit testing framework for C - Version 2.1-3 00:17:35.362 http://cunit.sourceforge.net/ 00:17:35.362 00:17:35.362 00:17:35.362 Suite: bdevio tests on: Nvme1n1 00:17:35.362 Test: blockdev write read block ...passed 00:17:35.362 Test: blockdev write zeroes read block ...passed 00:17:35.362 Test: blockdev write zeroes read no split ...passed 00:17:35.362 Test: blockdev write zeroes read split ...passed 00:17:35.618 Test: blockdev write zeroes read split partial ...passed 00:17:35.619 Test: blockdev reset ...[2024-07-12 11:54:24.857347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:35.619 [2024-07-12 11:54:24.857457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9e90 (9): Bad file descriptor 00:17:35.619 [2024-07-12 11:54:24.875367] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:35.619 passed 00:17:35.619 Test: blockdev write read 8 blocks ...passed 00:17:35.619 Test: blockdev write read size > 128k ...passed 00:17:35.619 Test: blockdev write read invalid size ...passed 00:17:35.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.619 Test: blockdev write read max offset ...passed 00:17:35.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:35.619 Test: blockdev writev readv 8 blocks ...passed 00:17:35.619 Test: blockdev writev readv 30 x 1block ...passed 00:17:35.619 Test: blockdev writev readv block ...passed 00:17:35.619 Test: blockdev writev readv size > 128k ...passed 00:17:35.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:35.619 Test: blockdev comparev and writev ...[2024-07-12 11:54:25.088104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.088141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.088165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.088182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.088534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.088559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.088581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.088598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.088948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.088973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.088994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.089010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.089343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.089366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.619 [2024-07-12 11:54:25.089387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.619 [2024-07-12 11:54:25.089403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.877 passed 00:17:35.877 Test: blockdev nvme passthru rw ...passed 00:17:35.877 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:54:25.172142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.877 [2024-07-12 11:54:25.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.877 [2024-07-12 11:54:25.172324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.877 [2024-07-12 11:54:25.172348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.877 [2024-07-12 11:54:25.172485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.877 [2024-07-12 11:54:25.172507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.877 [2024-07-12 11:54:25.172642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.877 [2024-07-12 11:54:25.172664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.877 passed 00:17:35.877 Test: blockdev nvme admin passthru ...passed 00:17:35.877 Test: blockdev copy ...passed 00:17:35.877 00:17:35.877 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.877 suites 1 1 n/a 0 0 00:17:35.877 tests 23 23 23 0 0 00:17:35.877 asserts 152 152 152 0 n/a 00:17:35.877 00:17:35.877 Elapsed time = 1.145 seconds 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.135 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:36.135 rmmod nvme_tcp 00:17:36.393 rmmod nvme_fabrics 00:17:36.393 rmmod nvme_keyring 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 941566 ']' 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 941566 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 941566 ']' 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 941566 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 941566 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 941566' 00:17:36.393 killing process with pid 941566 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 941566 00:17:36.393 11:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 941566 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.653 11:54:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.183 11:54:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.183 00:17:39.183 real 0m7.142s 00:17:39.183 user 0m13.803s 00:17:39.183 sys 0m2.448s 00:17:39.183 11:54:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:39.183 11:54:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.183 ************************************ 00:17:39.183 END TEST nvmf_bdevio_no_huge 00:17:39.183 ************************************ 00:17:39.183 11:54:28 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.183 11:54:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:39.183 11:54:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:39.183 11:54:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.183 ************************************ 00:17:39.183 START TEST nvmf_tls 00:17:39.183 ************************************ 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.183 * Looking for test storage... 
00:17:39.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.183 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.184 11:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.085 
11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:41.085 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:41.085 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:41.085 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:41.085 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:41.085 00:17:41.085 --- 10.0.0.2 ping statistics --- 00:17:41.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.085 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:41.085 00:17:41.085 --- 10.0.0.1 ping statistics --- 00:17:41.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.085 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:41.085 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=943909 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 943909 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 943909 ']' 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:41.086 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.086 [2024-07-12 11:54:30.446653] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:17:41.086 [2024-07-12 11:54:30.446723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.086 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.086 [2024-07-12 11:54:30.510556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.371 [2024-07-12 11:54:30.618837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.371 [2024-07-12 11:54:30.618910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
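For reference: the TLS target here is launched with --wait-for-rpc so the ssl socket implementation can be configured over RPC before subsystem initialization, which is what tls.sh does in the entries that follow. A minimal sketch of that ordering, with rpc.py standing for scripts/rpc.py against the target started above and the flags taken from the commands logged in this run:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  ./scripts/rpc.py sock_set_default_impl -i ssl
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init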
00:17:41.371 [2024-07-12 11:54:30.618924] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.371 [2024-07-12 11:54:30.618948] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.371 [2024-07-12 11:54:30.618958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.371 [2024-07-12 11:54:30.618985] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:41.371 11:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:41.631 true 00:17:41.631 11:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.631 11:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:41.939 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:41.939 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:41.939 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:41.939 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.939 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:42.196 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:42.196 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:42.196 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:42.452 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.452 11:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:42.708 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:42.708 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:42.708 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.708 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:42.964 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:42.964 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:42.964 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:43.222 11:54:32 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.222 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:43.481 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:43.481 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:43.481 11:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:43.744 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.744 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:44.005 11:54:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.WKA2S5YOdE 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.bTuwrKCJwD 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WKA2S5YOdE 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.bTuwrKCJwD 00:17:44.263 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:44.521 11:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:44.779 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WKA2S5YOdE 00:17:44.779 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WKA2S5YOdE 00:17:44.779 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.037 [2024-07-12 11:54:34.370934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.037 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.294 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.552 [2024-07-12 11:54:34.856256] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.552 [2024-07-12 11:54:34.856509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.552 11:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.808 malloc0 00:17:45.808 11:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.066 11:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WKA2S5YOdE 00:17:46.324 [2024-07-12 11:54:35.601691] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:46.324 11:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WKA2S5YOdE 00:17:46.324 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.294 Initializing NVMe Controllers 00:17:56.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.294 Initialization complete. Launching workers. 
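For readability, the target-side RPC sequence tls.sh issued above (after framework_start_init, through PSK registration) condenses to roughly the following; rpc.py stands for scripts/rpc.py pointed at the target running in cvl_0_0_ns_spdk, and the key path is the temp file created earlier in this run:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WKA2S5YOdE

The spdk_nvme_perf run whose output follows then connects as nqn.2016-06.io.spdk:host1 with the matching key via --psk-path /tmp/tmp.WKA2S5YOdE.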
00:17:56.294 ======================================================== 00:17:56.294 Latency(us) 00:17:56.294 Device Information : IOPS MiB/s Average min max 00:17:56.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7419.78 28.98 8628.53 1361.04 10111.13 00:17:56.294 ======================================================== 00:17:56.294 Total : 7419.78 28.98 8628.53 1361.04 10111.13 00:17:56.294 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WKA2S5YOdE 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WKA2S5YOdE' 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=945688 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 945688 /var/tmp/bdevperf.sock 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 945688 ']' 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:56.294 11:54:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.294 [2024-07-12 11:54:45.769832] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:17:56.294 [2024-07-12 11:54:45.769938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945688 ] 00:17:56.552 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.552 [2024-07-12 11:54:45.835842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.552 [2024-07-12 11:54:45.948906] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.811 11:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:56.811 11:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:17:56.811 11:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WKA2S5YOdE 00:17:56.811 [2024-07-12 11:54:46.294930] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.811 [2024-07-12 11:54:46.295055] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:57.069 TLSTESTn1 00:17:57.069 11:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.069 Running I/O for 10 seconds... 00:18:09.260 00:18:09.260 Latency(us) 00:18:09.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:09.260 Verification LBA range: start 0x0 length 0x2000 00:18:09.260 TLSTESTn1 : 10.02 3453.25 13.49 0.00 0.00 37006.56 6310.87 40195.41 00:18:09.260 =================================================================================================================== 00:18:09.260 Total : 3453.25 13.49 0.00 0.00 37006.56 6310.87 40195.41 00:18:09.260 0 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 945688 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 945688 ']' 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 945688 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 945688 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 945688' 00:18:09.260 killing process with pid 945688 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 945688 00:18:09.260 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.260 00:18:09.260 Latency(us) 00:18:09.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.260 
=================================================================================================================== 00:18:09.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.260 [2024-07-12 11:54:56.586088] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 945688 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bTuwrKCJwD 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bTuwrKCJwD 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bTuwrKCJwD 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bTuwrKCJwD' 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=947008 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 947008 /var/tmp/bdevperf.sock 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947008 ']' 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:09.260 11:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.260 [2024-07-12 11:54:56.900509] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
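The failing variants that follow (target/tls.sh 146, 149, 152 and 155) wrap run_bdevperf in the NOT helper from autotest_common.sh, so each case passes only when the attach fails and es ends up as 1. A simplified stand-in for that helper, purely to show the pattern (the real function also inspects the argument type before running it), could look like:

  # simplified sketch of the NOT helper: invert the exit status of the wrapped command
  NOT() {
    if "$@"; then
      return 1    # the wrapped command unexpectedly succeeded
    fi
    return 0      # the wrapped command failed, which is the expected outcome here
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/wrong-or-unregistered.key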
00:18:09.260 [2024-07-12 11:54:56.900589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947008 ] 00:18:09.260 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.260 [2024-07-12 11:54:56.961419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.260 [2024-07-12 11:54:57.068240] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bTuwrKCJwD 00:18:09.260 [2024-07-12 11:54:57.453301] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.260 [2024-07-12 11:54:57.453443] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.260 [2024-07-12 11:54:57.459717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.260 [2024-07-12 11:54:57.460296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b0b0 (107): Transport endpoint is not connected 00:18:09.260 [2024-07-12 11:54:57.461295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b0b0 (9): Bad file descriptor 00:18:09.260 [2024-07-12 11:54:57.462294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.260 [2024-07-12 11:54:57.462313] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.260 [2024-07-12 11:54:57.462344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:09.260 request: 00:18:09.260 { 00:18:09.260 "name": "TLSTEST", 00:18:09.260 "trtype": "tcp", 00:18:09.260 "traddr": "10.0.0.2", 00:18:09.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.260 "adrfam": "ipv4", 00:18:09.260 "trsvcid": "4420", 00:18:09.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.260 "psk": "/tmp/tmp.bTuwrKCJwD", 00:18:09.260 "method": "bdev_nvme_attach_controller", 00:18:09.260 "req_id": 1 00:18:09.260 } 00:18:09.260 Got JSON-RPC error response 00:18:09.260 response: 00:18:09.260 { 00:18:09.260 "code": -5, 00:18:09.260 "message": "Input/output error" 00:18:09.260 } 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 947008 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947008 ']' 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947008 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947008 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947008' 00:18:09.260 killing process with pid 947008 00:18:09.260 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947008 00:18:09.260 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.260 00:18:09.260 Latency(us) 00:18:09.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.261 =================================================================================================================== 00:18:09.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.261 [2024-07-12 11:54:57.514019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947008 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WKA2S5YOdE 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WKA2S5YOdE 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WKA2S5YOdE 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WKA2S5YOdE' 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=947136 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 947136 /var/tmp/bdevperf.sock 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947136 ']' 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:09.261 11:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.261 [2024-07-12 11:54:57.817481] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:09.261 [2024-07-12 11:54:57.817562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947136 ] 00:18:09.261 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.261 [2024-07-12 11:54:57.876932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.261 [2024-07-12 11:54:57.980072] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WKA2S5YOdE 00:18:09.261 [2024-07-12 11:54:58.361789] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.261 [2024-07-12 11:54:58.361947] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.261 [2024-07-12 11:54:58.371875] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.261 [2024-07-12 11:54:58.371929] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.261 [2024-07-12 11:54:58.371971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.261 [2024-07-12 11:54:58.372872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1b0b0 (107): Transport endpoint is not connected 00:18:09.261 [2024-07-12 11:54:58.373841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1b0b0 (9): Bad file descriptor 00:18:09.261 [2024-07-12 11:54:58.374841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.261 [2024-07-12 11:54:58.374881] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.261 [2024-07-12 11:54:58.374898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
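The cross-wired pairings in this case and the next (host2 against cnode1 here, cnode2 against host1 below) fail for the same reason: the target looks the PSK up by an identity string that embeds the host NQN and the subsystem NQN, and only the host1/cnode1 pairing was registered with a key. The identity being searched for can be reproduced directly from the error text above (format inferred from the tcp_sock_get_key message, not from the spec):

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  # "NVMe0R01 <hostnqn> <subnqn>", as printed in the "Could not find PSK for identity" error
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"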
00:18:09.261 request: 00:18:09.261 { 00:18:09.261 "name": "TLSTEST", 00:18:09.261 "trtype": "tcp", 00:18:09.261 "traddr": "10.0.0.2", 00:18:09.261 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:09.261 "adrfam": "ipv4", 00:18:09.261 "trsvcid": "4420", 00:18:09.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.261 "psk": "/tmp/tmp.WKA2S5YOdE", 00:18:09.261 "method": "bdev_nvme_attach_controller", 00:18:09.261 "req_id": 1 00:18:09.261 } 00:18:09.261 Got JSON-RPC error response 00:18:09.261 response: 00:18:09.261 { 00:18:09.261 "code": -5, 00:18:09.261 "message": "Input/output error" 00:18:09.261 } 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 947136 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947136 ']' 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947136 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947136 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947136' 00:18:09.261 killing process with pid 947136 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947136 00:18:09.261 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.261 00:18:09.261 Latency(us) 00:18:09.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.261 =================================================================================================================== 00:18:09.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.261 [2024-07-12 11:54:58.418094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947136 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WKA2S5YOdE 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WKA2S5YOdE 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:18:09.261 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WKA2S5YOdE 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WKA2S5YOdE' 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=947157 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 947157 /var/tmp/bdevperf.sock 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947157 ']' 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:09.262 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.262 [2024-07-12 11:54:58.692229] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:09.262 [2024-07-12 11:54:58.692313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947157 ] 00:18:09.262 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.262 [2024-07-12 11:54:58.749405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.520 [2024-07-12 11:54:58.867703] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.520 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:09.520 11:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:09.520 11:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WKA2S5YOdE 00:18:09.778 [2024-07-12 11:54:59.192291] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.778 [2024-07-12 11:54:59.192406] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.778 [2024-07-12 11:54:59.204030] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:09.778 [2024-07-12 11:54:59.204069] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:09.778 [2024-07-12 11:54:59.204161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.778 [2024-07-12 11:54:59.205086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d30b0 (107): Transport endpoint is not connected 00:18:09.778 [2024-07-12 11:54:59.206078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d30b0 (9): Bad file descriptor 00:18:09.778 [2024-07-12 11:54:59.207076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:09.778 [2024-07-12 11:54:59.207096] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.778 [2024-07-12 11:54:59.207112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:09.778 request: 00:18:09.778 { 00:18:09.778 "name": "TLSTEST", 00:18:09.778 "trtype": "tcp", 00:18:09.778 "traddr": "10.0.0.2", 00:18:09.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.778 "adrfam": "ipv4", 00:18:09.778 "trsvcid": "4420", 00:18:09.778 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:09.778 "psk": "/tmp/tmp.WKA2S5YOdE", 00:18:09.778 "method": "bdev_nvme_attach_controller", 00:18:09.778 "req_id": 1 00:18:09.778 } 00:18:09.778 Got JSON-RPC error response 00:18:09.778 response: 00:18:09.778 { 00:18:09.778 "code": -5, 00:18:09.778 "message": "Input/output error" 00:18:09.778 } 00:18:09.778 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 947157 00:18:09.778 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947157 ']' 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947157 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947157 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947157' 00:18:09.779 killing process with pid 947157 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947157 00:18:09.779 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.779 00:18:09.779 Latency(us) 00:18:09.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.779 =================================================================================================================== 00:18:09.779 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.779 [2024-07-12 11:54:59.255136] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.779 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947157 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:10.037 
11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=947296 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 947296 /var/tmp/bdevperf.sock 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947296 ']' 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:10.037 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 [2024-07-12 11:54:59.561502] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:10.295 [2024-07-12 11:54:59.561578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947296 ] 00:18:10.295 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.295 [2024-07-12 11:54:59.618511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.295 [2024-07-12 11:54:59.720594] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.552 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:10.552 11:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:10.552 11:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:10.810 [2024-07-12 11:55:00.106502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.810 [2024-07-12 11:55:00.108556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f0a10 (9): Bad file descriptor 00:18:10.810 [2024-07-12 11:55:00.109552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.810 [2024-07-12 11:55:00.109572] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.810 [2024-07-12 11:55:00.109604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:10.810 request: 00:18:10.810 { 00:18:10.810 "name": "TLSTEST", 00:18:10.810 "trtype": "tcp", 00:18:10.810 "traddr": "10.0.0.2", 00:18:10.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.810 "adrfam": "ipv4", 00:18:10.810 "trsvcid": "4420", 00:18:10.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.810 "method": "bdev_nvme_attach_controller", 00:18:10.810 "req_id": 1 00:18:10.810 } 00:18:10.810 Got JSON-RPC error response 00:18:10.810 response: 00:18:10.810 { 00:18:10.810 "code": -5, 00:18:10.810 "message": "Input/output error" 00:18:10.810 } 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 947296 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947296 ']' 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947296 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947296 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947296' 00:18:10.810 killing process with pid 947296 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947296 00:18:10.810 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.810 00:18:10.810 Latency(us) 00:18:10.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.810 =================================================================================================================== 00:18:10.810 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.810 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947296 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 943909 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 943909 ']' 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 943909 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 943909 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 943909' 00:18:11.068 killing process with pid 943909 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 943909 00:18:11.068 
[2024-07-12 11:55:00.454100] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:11.068 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 943909 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.zt4jvl7nly 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.zt4jvl7nly 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=947446 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 947446 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947446 ']' 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:11.326 11:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 [2024-07-12 11:55:00.849324] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
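format_interchange_psk above turns the raw hex string 00112233445566778899aabbccddeeff0011223344556677 plus the digest selector 2 into the interchange key NVMeTLSkey-1:02:...==: that gets written to /tmp/tmp.zt4jvl7nly. Judging from the output, the payload is the ASCII key text followed by a 4-byte checksum, base64-encoded; a hedged sketch of that derivation (the little-endian CRC32 framing is an assumption inferred from the value above, mirroring the inline python that nvmf/common.sh runs):

  key=00112233445566778899aabbccddeeff0011223344556677
  digest=2   # 01 and 02 select the PSK hash (SHA-256 / SHA-384) in the interchange format
  # assumption: payload = ASCII key text + CRC32(key) little-endian, then base64
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest"
  # if the framing assumption holds, this prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: value as key_long above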
00:18:11.585 [2024-07-12 11:55:00.849414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.585 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.585 [2024-07-12 11:55:00.917779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.585 [2024-07-12 11:55:01.033682] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.585 [2024-07-12 11:55:01.033745] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.585 [2024-07-12 11:55:01.033761] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.585 [2024-07-12 11:55:01.033775] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.585 [2024-07-12 11:55:01.033787] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.585 [2024-07-12 11:55:01.033824] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.545 11:55:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:12.545 11:55:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:12.545 11:55:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.545 11:55:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:12.545 11:55:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.546 11:55:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.546 11:55:01 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:12.546 11:55:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zt4jvl7nly 00:18:12.546 11:55:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.804 [2024-07-12 11:55:02.106000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.804 11:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:13.062 11:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:13.319 [2024-07-12 11:55:02.623389] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.319 [2024-07-12 11:55:02.623619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.319 11:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:13.577 malloc0 00:18:13.577 11:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:13.834 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 
00:18:14.091 [2024-07-12 11:55:03.493909] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zt4jvl7nly 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zt4jvl7nly' 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=947864 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 947864 /var/tmp/bdevperf.sock 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 947864 ']' 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:14.091 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.091 [2024-07-12 11:55:03.551786] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
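setup_nvmf_tgt (target/tls.sh@49-58, executed just above against the freshly generated key) is the target-side half of the test: create the TCP transport, create a subsystem, add a TLS listener (-k), back it with a malloc bdev and a namespace, then register the allowed host together with its PSK file. Condensed from the commands above, with the jenkins workspace prefix dropped:

  key=/tmp/tmp.zt4jvl7nly   # interchange PSK written above, mode 0600
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"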
00:18:14.091 [2024-07-12 11:55:03.551856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947864 ] 00:18:14.091 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.349 [2024-07-12 11:55:03.610635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.349 [2024-07-12 11:55:03.718451] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.349 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:14.349 11:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:14.349 11:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:14.606 [2024-07-12 11:55:04.037957] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.607 [2024-07-12 11:55:04.038073] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.863 TLSTESTn1 00:18:14.863 11:55:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.863 Running I/O for 10 seconds... 00:18:24.825 00:18:24.825 Latency(us) 00:18:24.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.825 Verification LBA range: start 0x0 length 0x2000 00:18:24.825 TLSTESTn1 : 10.02 3194.81 12.48 0.00 0.00 39995.22 6893.42 38447.79 00:18:24.825 =================================================================================================================== 00:18:24.825 Total : 3194.81 12.48 0.00 0.00 39995.22 6893.42 38447.79 00:18:24.825 0 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 947864 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947864 ']' 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947864 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947864 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947864' 00:18:24.825 killing process with pid 947864 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947864 00:18:24.825 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.825 00:18:24.825 Latency(us) 00:18:24.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.825 
=================================================================================================================== 00:18:24.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.825 [2024-07-12 11:55:14.300662] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:24.825 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947864 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.zt4jvl7nly 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zt4jvl7nly 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zt4jvl7nly 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zt4jvl7nly 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zt4jvl7nly' 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.083 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=949061 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 949061 /var/tmp/bdevperf.sock 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 949061 ']' 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:25.341 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.341 [2024-07-12 11:55:14.618441] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:25.341 [2024-07-12 11:55:14.618519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949061 ] 00:18:25.341 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.341 [2024-07-12 11:55:14.675642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.341 [2024-07-12 11:55:14.779374] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.600 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:25.600 11:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:25.600 11:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:25.858 [2024-07-12 11:55:15.116590] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.858 [2024-07-12 11:55:15.116673] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:25.858 [2024-07-12 11:55:15.116687] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.zt4jvl7nly 00:18:25.858 request: 00:18:25.858 { 00:18:25.858 "name": "TLSTEST", 00:18:25.858 "trtype": "tcp", 00:18:25.858 "traddr": "10.0.0.2", 00:18:25.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.858 "adrfam": "ipv4", 00:18:25.858 "trsvcid": "4420", 00:18:25.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.858 "psk": "/tmp/tmp.zt4jvl7nly", 00:18:25.858 "method": "bdev_nvme_attach_controller", 00:18:25.858 "req_id": 1 00:18:25.858 } 00:18:25.858 Got JSON-RPC error response 00:18:25.858 response: 00:18:25.858 { 00:18:25.858 "code": -1, 00:18:25.858 "message": "Operation not permitted" 00:18:25.858 } 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 949061 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 949061 ']' 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 949061 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 949061 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 949061' 00:18:25.858 killing process with pid 949061 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 949061 00:18:25.858 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.858 00:18:25.858 Latency(us) 00:18:25.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.858 =================================================================================================================== 00:18:25.858 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.858 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # 
wait 949061 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 947446 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 947446 ']' 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 947446 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947446 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947446' 00:18:26.116 killing process with pid 947446 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 947446 00:18:26.116 [2024-07-12 11:55:15.463831] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:26.116 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 947446 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=949205 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 949205 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 949205 ']' 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:26.375 11:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 [2024-07-12 11:55:15.799796] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
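The chmod 0666 detour above (target/tls.sh@170) is what produced the "Incorrect permissions for PSK file" / "Operation not permitted" attach failure, and the same loosened mode makes nvmf_subsystem_add_host fail further down with an Internal error; both the initiator and the target refuse a key that is readable by group or other. Restoring owner-only access, as target/tls.sh@181 does below, is enough to get back to the passing path:

  chmod 0600 /tmp/tmp.zt4jvl7nly
  stat -c '%a %U %n' /tmp/tmp.zt4jvl7nly   # expect: 600 <owning user> /tmp/tmp.zt4jvl7nly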
00:18:26.375 [2024-07-12 11:55:15.799895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.375 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.375 [2024-07-12 11:55:15.865326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.633 [2024-07-12 11:55:15.974720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.633 [2024-07-12 11:55:15.974787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.633 [2024-07-12 11:55:15.974816] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.633 [2024-07-12 11:55:15.974828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.633 [2024-07-12 11:55:15.974838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.633 [2024-07-12 11:55:15.974872] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zt4jvl7nly 00:18:26.633 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:26.890 [2024-07-12 11:55:16.335466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.890 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.456 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:27.456 [2024-07-12 11:55:16.917003] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:18:27.456 [2024-07-12 11:55:16.917228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.456 11:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.022 malloc0 00:18:28.023 11:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.023 11:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:28.281 [2024-07-12 11:55:17.754305] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:28.281 [2024-07-12 11:55:17.754353] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:28.281 [2024-07-12 11:55:17.754392] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:28.281 request: 00:18:28.281 { 00:18:28.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.281 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.281 "psk": "/tmp/tmp.zt4jvl7nly", 00:18:28.281 "method": "nvmf_subsystem_add_host", 00:18:28.281 "req_id": 1 00:18:28.281 } 00:18:28.281 Got JSON-RPC error response 00:18:28.281 response: 00:18:28.281 { 00:18:28.281 "code": -32603, 00:18:28.281 "message": "Internal error" 00:18:28.281 } 00:18:28.281 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:28.281 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 949205 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 949205 ']' 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 949205 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 949205 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 949205' 00:18:28.539 killing process with pid 949205 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 949205 00:18:28.539 11:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 949205 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.zt4jvl7nly 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=949541 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 949541 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 949541 ']' 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:28.796 11:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.796 [2024-07-12 11:55:18.167996] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:28.796 [2024-07-12 11:55:18.168078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.796 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.796 [2024-07-12 11:55:18.236252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.054 [2024-07-12 11:55:18.352423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.054 [2024-07-12 11:55:18.352490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.054 [2024-07-12 11:55:18.352507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.054 [2024-07-12 11:55:18.352521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.054 [2024-07-12 11:55:18.352532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
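
Note: the earlier nvmf_subsystem_add_host failure ("Incorrect permissions for PSK file") is the expected negative case; the target rejects a PSK interchange file whose permissions are too open, so tls.sh tightens the mode to 0600 and repeats the setup with the same key. Condensed, the sequence the following setup_nvmf_tgt call performs is (a sketch only; "rpc.py" abbreviates the full scripts/rpc.py path shown in the log, and the key path is the temporary file created by this test):

  chmod 0600 /tmp/tmp.zt4jvl7nly                      # PSK file must not be group/world accessible
  rpc.py nvmf_create_transport -t tcp -o              # TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly
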
00:18:29.054 [2024-07-12 11:55:18.352563] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.620 11:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:29.620 11:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:29.620 11:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.620 11:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:29.620 11:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.878 11:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.878 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:29.878 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zt4jvl7nly 00:18:29.878 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.878 [2024-07-12 11:55:19.352591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.878 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.136 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.394 [2024-07-12 11:55:19.849967] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.394 [2024-07-12 11:55:19.850238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.394 11:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.652 malloc0 00:18:30.652 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.910 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:31.168 [2024-07-12 11:55:20.613202] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=949914 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 949914 /var/tmp/bdevperf.sock 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 949914 ']' 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:31.168 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.427 [2024-07-12 11:55:20.674016] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:31.427 [2024-07-12 11:55:20.674098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949914 ] 00:18:31.427 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.427 [2024-07-12 11:55:20.735326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.427 [2024-07-12 11:55:20.844313] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.685 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:31.685 11:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:31.685 11:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:31.944 [2024-07-12 11:55:21.235402] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.944 [2024-07-12 11:55:21.235527] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:31.944 TLSTESTn1 00:18:31.944 11:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:32.202 11:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:32.202 "subsystems": [ 00:18:32.202 { 00:18:32.202 "subsystem": "keyring", 00:18:32.202 "config": [] 00:18:32.202 }, 00:18:32.202 { 00:18:32.202 "subsystem": "iobuf", 00:18:32.202 "config": [ 00:18:32.202 { 00:18:32.202 "method": "iobuf_set_options", 00:18:32.202 "params": { 00:18:32.202 "small_pool_count": 8192, 00:18:32.202 "large_pool_count": 1024, 00:18:32.202 "small_bufsize": 8192, 00:18:32.202 "large_bufsize": 135168 00:18:32.202 } 00:18:32.202 } 00:18:32.202 ] 00:18:32.202 }, 00:18:32.202 { 00:18:32.202 "subsystem": "sock", 00:18:32.202 "config": [ 00:18:32.202 { 00:18:32.202 "method": "sock_set_default_impl", 00:18:32.202 "params": { 00:18:32.202 "impl_name": "posix" 00:18:32.202 } 00:18:32.202 }, 00:18:32.202 { 00:18:32.202 "method": "sock_impl_set_options", 00:18:32.202 "params": { 00:18:32.202 "impl_name": "ssl", 00:18:32.202 "recv_buf_size": 4096, 00:18:32.202 "send_buf_size": 4096, 00:18:32.202 "enable_recv_pipe": true, 00:18:32.202 "enable_quickack": false, 00:18:32.202 "enable_placement_id": 0, 00:18:32.202 "enable_zerocopy_send_server": true, 00:18:32.202 "enable_zerocopy_send_client": false, 00:18:32.202 "zerocopy_threshold": 0, 00:18:32.202 "tls_version": 0, 00:18:32.203 "enable_ktls": false 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "sock_impl_set_options", 00:18:32.203 "params": { 00:18:32.203 "impl_name": "posix", 00:18:32.203 "recv_buf_size": 2097152, 00:18:32.203 "send_buf_size": 2097152, 
00:18:32.203 "enable_recv_pipe": true, 00:18:32.203 "enable_quickack": false, 00:18:32.203 "enable_placement_id": 0, 00:18:32.203 "enable_zerocopy_send_server": true, 00:18:32.203 "enable_zerocopy_send_client": false, 00:18:32.203 "zerocopy_threshold": 0, 00:18:32.203 "tls_version": 0, 00:18:32.203 "enable_ktls": false 00:18:32.203 } 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "vmd", 00:18:32.203 "config": [] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "accel", 00:18:32.203 "config": [ 00:18:32.203 { 00:18:32.203 "method": "accel_set_options", 00:18:32.203 "params": { 00:18:32.203 "small_cache_size": 128, 00:18:32.203 "large_cache_size": 16, 00:18:32.203 "task_count": 2048, 00:18:32.203 "sequence_count": 2048, 00:18:32.203 "buf_count": 2048 00:18:32.203 } 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "bdev", 00:18:32.203 "config": [ 00:18:32.203 { 00:18:32.203 "method": "bdev_set_options", 00:18:32.203 "params": { 00:18:32.203 "bdev_io_pool_size": 65535, 00:18:32.203 "bdev_io_cache_size": 256, 00:18:32.203 "bdev_auto_examine": true, 00:18:32.203 "iobuf_small_cache_size": 128, 00:18:32.203 "iobuf_large_cache_size": 16 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_raid_set_options", 00:18:32.203 "params": { 00:18:32.203 "process_window_size_kb": 1024 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_iscsi_set_options", 00:18:32.203 "params": { 00:18:32.203 "timeout_sec": 30 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_nvme_set_options", 00:18:32.203 "params": { 00:18:32.203 "action_on_timeout": "none", 00:18:32.203 "timeout_us": 0, 00:18:32.203 "timeout_admin_us": 0, 00:18:32.203 "keep_alive_timeout_ms": 10000, 00:18:32.203 "arbitration_burst": 0, 00:18:32.203 "low_priority_weight": 0, 00:18:32.203 "medium_priority_weight": 0, 00:18:32.203 "high_priority_weight": 0, 00:18:32.203 "nvme_adminq_poll_period_us": 10000, 00:18:32.203 "nvme_ioq_poll_period_us": 0, 00:18:32.203 "io_queue_requests": 0, 00:18:32.203 "delay_cmd_submit": true, 00:18:32.203 "transport_retry_count": 4, 00:18:32.203 "bdev_retry_count": 3, 00:18:32.203 "transport_ack_timeout": 0, 00:18:32.203 "ctrlr_loss_timeout_sec": 0, 00:18:32.203 "reconnect_delay_sec": 0, 00:18:32.203 "fast_io_fail_timeout_sec": 0, 00:18:32.203 "disable_auto_failback": false, 00:18:32.203 "generate_uuids": false, 00:18:32.203 "transport_tos": 0, 00:18:32.203 "nvme_error_stat": false, 00:18:32.203 "rdma_srq_size": 0, 00:18:32.203 "io_path_stat": false, 00:18:32.203 "allow_accel_sequence": false, 00:18:32.203 "rdma_max_cq_size": 0, 00:18:32.203 "rdma_cm_event_timeout_ms": 0, 00:18:32.203 "dhchap_digests": [ 00:18:32.203 "sha256", 00:18:32.203 "sha384", 00:18:32.203 "sha512" 00:18:32.203 ], 00:18:32.203 "dhchap_dhgroups": [ 00:18:32.203 "null", 00:18:32.203 "ffdhe2048", 00:18:32.203 "ffdhe3072", 00:18:32.203 "ffdhe4096", 00:18:32.203 "ffdhe6144", 00:18:32.203 "ffdhe8192" 00:18:32.203 ] 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_nvme_set_hotplug", 00:18:32.203 "params": { 00:18:32.203 "period_us": 100000, 00:18:32.203 "enable": false 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_malloc_create", 00:18:32.203 "params": { 00:18:32.203 "name": "malloc0", 00:18:32.203 "num_blocks": 8192, 00:18:32.203 "block_size": 4096, 00:18:32.203 "physical_block_size": 4096, 00:18:32.203 "uuid": "33052cf4-e729-4d7e-a814-d24e01873681", 
00:18:32.203 "optimal_io_boundary": 0 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "bdev_wait_for_examine" 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "nbd", 00:18:32.203 "config": [] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "scheduler", 00:18:32.203 "config": [ 00:18:32.203 { 00:18:32.203 "method": "framework_set_scheduler", 00:18:32.203 "params": { 00:18:32.203 "name": "static" 00:18:32.203 } 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "subsystem": "nvmf", 00:18:32.203 "config": [ 00:18:32.203 { 00:18:32.203 "method": "nvmf_set_config", 00:18:32.203 "params": { 00:18:32.203 "discovery_filter": "match_any", 00:18:32.203 "admin_cmd_passthru": { 00:18:32.203 "identify_ctrlr": false 00:18:32.203 } 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_set_max_subsystems", 00:18:32.203 "params": { 00:18:32.203 "max_subsystems": 1024 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_set_crdt", 00:18:32.203 "params": { 00:18:32.203 "crdt1": 0, 00:18:32.203 "crdt2": 0, 00:18:32.203 "crdt3": 0 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_create_transport", 00:18:32.203 "params": { 00:18:32.203 "trtype": "TCP", 00:18:32.203 "max_queue_depth": 128, 00:18:32.203 "max_io_qpairs_per_ctrlr": 127, 00:18:32.203 "in_capsule_data_size": 4096, 00:18:32.203 "max_io_size": 131072, 00:18:32.203 "io_unit_size": 131072, 00:18:32.203 "max_aq_depth": 128, 00:18:32.203 "num_shared_buffers": 511, 00:18:32.203 "buf_cache_size": 4294967295, 00:18:32.203 "dif_insert_or_strip": false, 00:18:32.203 "zcopy": false, 00:18:32.203 "c2h_success": false, 00:18:32.203 "sock_priority": 0, 00:18:32.203 "abort_timeout_sec": 1, 00:18:32.203 "ack_timeout": 0, 00:18:32.203 "data_wr_pool_size": 0 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_create_subsystem", 00:18:32.203 "params": { 00:18:32.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.203 "allow_any_host": false, 00:18:32.203 "serial_number": "SPDK00000000000001", 00:18:32.203 "model_number": "SPDK bdev Controller", 00:18:32.203 "max_namespaces": 10, 00:18:32.203 "min_cntlid": 1, 00:18:32.203 "max_cntlid": 65519, 00:18:32.203 "ana_reporting": false 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_subsystem_add_host", 00:18:32.203 "params": { 00:18:32.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.203 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.203 "psk": "/tmp/tmp.zt4jvl7nly" 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_subsystem_add_ns", 00:18:32.203 "params": { 00:18:32.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.203 "namespace": { 00:18:32.203 "nsid": 1, 00:18:32.203 "bdev_name": "malloc0", 00:18:32.203 "nguid": "33052CF4E7294D7EA814D24E01873681", 00:18:32.203 "uuid": "33052cf4-e729-4d7e-a814-d24e01873681", 00:18:32.203 "no_auto_visible": false 00:18:32.203 } 00:18:32.203 } 00:18:32.203 }, 00:18:32.203 { 00:18:32.203 "method": "nvmf_subsystem_add_listener", 00:18:32.203 "params": { 00:18:32.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.203 "listen_address": { 00:18:32.203 "trtype": "TCP", 00:18:32.203 "adrfam": "IPv4", 00:18:32.203 "traddr": "10.0.0.2", 00:18:32.203 "trsvcid": "4420" 00:18:32.203 }, 00:18:32.203 "secure_channel": true 00:18:32.203 } 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 } 00:18:32.203 ] 00:18:32.203 }' 00:18:32.203 11:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:32.770 11:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:32.770 "subsystems": [ 00:18:32.770 { 00:18:32.770 "subsystem": "keyring", 00:18:32.770 "config": [] 00:18:32.770 }, 00:18:32.770 { 00:18:32.770 "subsystem": "iobuf", 00:18:32.770 "config": [ 00:18:32.770 { 00:18:32.770 "method": "iobuf_set_options", 00:18:32.770 "params": { 00:18:32.770 "small_pool_count": 8192, 00:18:32.770 "large_pool_count": 1024, 00:18:32.770 "small_bufsize": 8192, 00:18:32.770 "large_bufsize": 135168 00:18:32.770 } 00:18:32.770 } 00:18:32.770 ] 00:18:32.770 }, 00:18:32.771 { 00:18:32.771 "subsystem": "sock", 00:18:32.771 "config": [ 00:18:32.771 { 00:18:32.771 "method": "sock_set_default_impl", 00:18:32.771 "params": { 00:18:32.771 "impl_name": "posix" 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "sock_impl_set_options", 00:18:32.771 "params": { 00:18:32.771 "impl_name": "ssl", 00:18:32.771 "recv_buf_size": 4096, 00:18:32.771 "send_buf_size": 4096, 00:18:32.771 "enable_recv_pipe": true, 00:18:32.771 "enable_quickack": false, 00:18:32.771 "enable_placement_id": 0, 00:18:32.771 "enable_zerocopy_send_server": true, 00:18:32.771 "enable_zerocopy_send_client": false, 00:18:32.771 "zerocopy_threshold": 0, 00:18:32.771 "tls_version": 0, 00:18:32.771 "enable_ktls": false 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "sock_impl_set_options", 00:18:32.771 "params": { 00:18:32.771 "impl_name": "posix", 00:18:32.771 "recv_buf_size": 2097152, 00:18:32.771 "send_buf_size": 2097152, 00:18:32.771 "enable_recv_pipe": true, 00:18:32.771 "enable_quickack": false, 00:18:32.771 "enable_placement_id": 0, 00:18:32.771 "enable_zerocopy_send_server": true, 00:18:32.771 "enable_zerocopy_send_client": false, 00:18:32.771 "zerocopy_threshold": 0, 00:18:32.771 "tls_version": 0, 00:18:32.771 "enable_ktls": false 00:18:32.771 } 00:18:32.771 } 00:18:32.771 ] 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "subsystem": "vmd", 00:18:32.771 "config": [] 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "subsystem": "accel", 00:18:32.771 "config": [ 00:18:32.771 { 00:18:32.771 "method": "accel_set_options", 00:18:32.771 "params": { 00:18:32.771 "small_cache_size": 128, 00:18:32.771 "large_cache_size": 16, 00:18:32.771 "task_count": 2048, 00:18:32.771 "sequence_count": 2048, 00:18:32.771 "buf_count": 2048 00:18:32.771 } 00:18:32.771 } 00:18:32.771 ] 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "subsystem": "bdev", 00:18:32.771 "config": [ 00:18:32.771 { 00:18:32.771 "method": "bdev_set_options", 00:18:32.771 "params": { 00:18:32.771 "bdev_io_pool_size": 65535, 00:18:32.771 "bdev_io_cache_size": 256, 00:18:32.771 "bdev_auto_examine": true, 00:18:32.771 "iobuf_small_cache_size": 128, 00:18:32.771 "iobuf_large_cache_size": 16 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_raid_set_options", 00:18:32.771 "params": { 00:18:32.771 "process_window_size_kb": 1024 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_iscsi_set_options", 00:18:32.771 "params": { 00:18:32.771 "timeout_sec": 30 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_nvme_set_options", 00:18:32.771 "params": { 00:18:32.771 "action_on_timeout": "none", 00:18:32.771 "timeout_us": 0, 00:18:32.771 "timeout_admin_us": 0, 00:18:32.771 "keep_alive_timeout_ms": 10000, 00:18:32.771 "arbitration_burst": 0, 00:18:32.771 "low_priority_weight": 0, 
00:18:32.771 "medium_priority_weight": 0, 00:18:32.771 "high_priority_weight": 0, 00:18:32.771 "nvme_adminq_poll_period_us": 10000, 00:18:32.771 "nvme_ioq_poll_period_us": 0, 00:18:32.771 "io_queue_requests": 512, 00:18:32.771 "delay_cmd_submit": true, 00:18:32.771 "transport_retry_count": 4, 00:18:32.771 "bdev_retry_count": 3, 00:18:32.771 "transport_ack_timeout": 0, 00:18:32.771 "ctrlr_loss_timeout_sec": 0, 00:18:32.771 "reconnect_delay_sec": 0, 00:18:32.771 "fast_io_fail_timeout_sec": 0, 00:18:32.771 "disable_auto_failback": false, 00:18:32.771 "generate_uuids": false, 00:18:32.771 "transport_tos": 0, 00:18:32.771 "nvme_error_stat": false, 00:18:32.771 "rdma_srq_size": 0, 00:18:32.771 "io_path_stat": false, 00:18:32.771 "allow_accel_sequence": false, 00:18:32.771 "rdma_max_cq_size": 0, 00:18:32.771 "rdma_cm_event_timeout_ms": 0, 00:18:32.771 "dhchap_digests": [ 00:18:32.771 "sha256", 00:18:32.771 "sha384", 00:18:32.771 "sha512" 00:18:32.771 ], 00:18:32.771 "dhchap_dhgroups": [ 00:18:32.771 "null", 00:18:32.771 "ffdhe2048", 00:18:32.771 "ffdhe3072", 00:18:32.771 "ffdhe4096", 00:18:32.771 "ffdhe6144", 00:18:32.771 "ffdhe8192" 00:18:32.771 ] 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_nvme_attach_controller", 00:18:32.771 "params": { 00:18:32.771 "name": "TLSTEST", 00:18:32.771 "trtype": "TCP", 00:18:32.771 "adrfam": "IPv4", 00:18:32.771 "traddr": "10.0.0.2", 00:18:32.771 "trsvcid": "4420", 00:18:32.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.771 "prchk_reftag": false, 00:18:32.771 "prchk_guard": false, 00:18:32.771 "ctrlr_loss_timeout_sec": 0, 00:18:32.771 "reconnect_delay_sec": 0, 00:18:32.771 "fast_io_fail_timeout_sec": 0, 00:18:32.771 "psk": "/tmp/tmp.zt4jvl7nly", 00:18:32.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.771 "hdgst": false, 00:18:32.771 "ddgst": false 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_nvme_set_hotplug", 00:18:32.771 "params": { 00:18:32.771 "period_us": 100000, 00:18:32.771 "enable": false 00:18:32.771 } 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "method": "bdev_wait_for_examine" 00:18:32.771 } 00:18:32.771 ] 00:18:32.771 }, 00:18:32.771 { 00:18:32.771 "subsystem": "nbd", 00:18:32.771 "config": [] 00:18:32.771 } 00:18:32.771 ] 00:18:32.771 }' 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 949914 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 949914 ']' 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 949914 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 949914 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 949914' 00:18:32.771 killing process with pid 949914 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 949914 00:18:32.771 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.771 00:18:32.771 Latency(us) 00:18:32.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.771 
=================================================================================================================== 00:18:32.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.771 [2024-07-12 11:55:22.039362] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:32.771 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 949914 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 949541 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 949541 ']' 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 949541 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 949541 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 949541' 00:18:33.029 killing process with pid 949541 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 949541 00:18:33.029 [2024-07-12 11:55:22.341053] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:33.029 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 949541 00:18:33.286 11:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:33.286 11:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.286 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:33.286 11:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:33.286 "subsystems": [ 00:18:33.286 { 00:18:33.286 "subsystem": "keyring", 00:18:33.286 "config": [] 00:18:33.286 }, 00:18:33.286 { 00:18:33.286 "subsystem": "iobuf", 00:18:33.286 "config": [ 00:18:33.286 { 00:18:33.286 "method": "iobuf_set_options", 00:18:33.286 "params": { 00:18:33.286 "small_pool_count": 8192, 00:18:33.286 "large_pool_count": 1024, 00:18:33.286 "small_bufsize": 8192, 00:18:33.286 "large_bufsize": 135168 00:18:33.286 } 00:18:33.286 } 00:18:33.286 ] 00:18:33.286 }, 00:18:33.286 { 00:18:33.286 "subsystem": "sock", 00:18:33.286 "config": [ 00:18:33.286 { 00:18:33.286 "method": "sock_set_default_impl", 00:18:33.286 "params": { 00:18:33.286 "impl_name": "posix" 00:18:33.286 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "sock_impl_set_options", 00:18:33.287 "params": { 00:18:33.287 "impl_name": "ssl", 00:18:33.287 "recv_buf_size": 4096, 00:18:33.287 "send_buf_size": 4096, 00:18:33.287 "enable_recv_pipe": true, 00:18:33.287 "enable_quickack": false, 00:18:33.287 "enable_placement_id": 0, 00:18:33.287 "enable_zerocopy_send_server": true, 00:18:33.287 "enable_zerocopy_send_client": false, 00:18:33.287 "zerocopy_threshold": 0, 00:18:33.287 "tls_version": 0, 00:18:33.287 "enable_ktls": false 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "sock_impl_set_options", 00:18:33.287 "params": { 00:18:33.287 "impl_name": "posix", 00:18:33.287 "recv_buf_size": 2097152, 00:18:33.287 
"send_buf_size": 2097152, 00:18:33.287 "enable_recv_pipe": true, 00:18:33.287 "enable_quickack": false, 00:18:33.287 "enable_placement_id": 0, 00:18:33.287 "enable_zerocopy_send_server": true, 00:18:33.287 "enable_zerocopy_send_client": false, 00:18:33.287 "zerocopy_threshold": 0, 00:18:33.287 "tls_version": 0, 00:18:33.287 "enable_ktls": false 00:18:33.287 } 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "vmd", 00:18:33.287 "config": [] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "accel", 00:18:33.287 "config": [ 00:18:33.287 { 00:18:33.287 "method": "accel_set_options", 00:18:33.287 "params": { 00:18:33.287 "small_cache_size": 128, 00:18:33.287 "large_cache_size": 16, 00:18:33.287 "task_count": 2048, 00:18:33.287 "sequence_count": 2048, 00:18:33.287 "buf_count": 2048 00:18:33.287 } 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "bdev", 00:18:33.287 "config": [ 00:18:33.287 { 00:18:33.287 "method": "bdev_set_options", 00:18:33.287 "params": { 00:18:33.287 "bdev_io_pool_size": 65535, 00:18:33.287 "bdev_io_cache_size": 256, 00:18:33.287 "bdev_auto_examine": true, 00:18:33.287 "iobuf_small_cache_size": 128, 00:18:33.287 "iobuf_large_cache_size": 16 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_raid_set_options", 00:18:33.287 "params": { 00:18:33.287 "process_window_size_kb": 1024 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_iscsi_set_options", 00:18:33.287 "params": { 00:18:33.287 "timeout_sec": 30 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_nvme_set_options", 00:18:33.287 "params": { 00:18:33.287 "action_on_timeout": "none", 00:18:33.287 "timeout_us": 0, 00:18:33.287 "timeout_admin_us": 0, 00:18:33.287 "keep_alive_timeout_ms": 10000, 00:18:33.287 "arbitration_burst": 0, 00:18:33.287 "low_priority_weight": 0, 00:18:33.287 "medium_priority_weight": 0, 00:18:33.287 "high_priority_weight": 0, 00:18:33.287 "nvme_adminq_poll_period_us": 10000, 00:18:33.287 "nvme_ioq_poll_period_us": 0, 00:18:33.287 "io_queue_requests": 0, 00:18:33.287 "delay_cmd_submit": true, 00:18:33.287 "transport_retry_count": 4, 00:18:33.287 "bdev_retry_count": 3, 00:18:33.287 "transport_ack_timeout": 0, 00:18:33.287 "ctrlr_loss_timeout_sec": 0, 00:18:33.287 "reconnect_delay_sec": 0, 00:18:33.287 "fast_io_fail_timeout_sec": 0, 00:18:33.287 "disable_auto_failback": false, 00:18:33.287 "generate_uuids": false, 00:18:33.287 "transport_tos": 0, 00:18:33.287 "nvme_error_stat": false, 00:18:33.287 "rdma_srq_size": 0, 00:18:33.287 "io_path_stat": false, 00:18:33.287 "allow_accel_sequence": false, 00:18:33.287 "rdma_max_cq_size": 0, 00:18:33.287 "rdma_cm_event_timeout_ms": 0, 00:18:33.287 "dhchap_digests": [ 00:18:33.287 "sha256", 00:18:33.287 "sha384", 00:18:33.287 "sha512" 00:18:33.287 ], 00:18:33.287 "dhchap_dhgroups": [ 00:18:33.287 "null", 00:18:33.287 "ffdhe2048", 00:18:33.287 "ffdhe3072", 00:18:33.287 "ffdhe4096", 00:18:33.287 "ffdhe6144", 00:18:33.287 "ffdhe8192" 00:18:33.287 ] 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_nvme_set_hotplug", 00:18:33.287 "params": { 00:18:33.287 "period_us": 100000, 00:18:33.287 "enable": false 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_malloc_create", 00:18:33.287 "params": { 00:18:33.287 "name": "malloc0", 00:18:33.287 "num_blocks": 8192, 00:18:33.287 "block_size": 4096, 00:18:33.287 "physical_block_size": 4096, 00:18:33.287 "uuid": 
"33052cf4-e729-4d7e-a814-d24e01873681", 00:18:33.287 "optimal_io_boundary": 0 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "bdev_wait_for_examine" 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "nbd", 00:18:33.287 "config": [] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "scheduler", 00:18:33.287 "config": [ 00:18:33.287 { 00:18:33.287 "method": "framework_set_scheduler", 00:18:33.287 "params": { 00:18:33.287 "name": "static" 00:18:33.287 } 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "subsystem": "nvmf", 00:18:33.287 "config": [ 00:18:33.287 { 00:18:33.287 "method": "nvmf_set_config", 00:18:33.287 "params": { 00:18:33.287 "discovery_filter": "match_any", 00:18:33.287 "admin_cmd_passthru": { 00:18:33.287 "identify_ctrlr": false 00:18:33.287 } 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_set_max_subsystems", 00:18:33.287 "params": { 00:18:33.287 "max_subsystems": 1024 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_set_crdt", 00:18:33.287 "params": { 00:18:33.287 "crdt1": 0, 00:18:33.287 "crdt2": 0, 00:18:33.287 "crdt3": 0 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_create_transport", 00:18:33.287 "params": { 00:18:33.287 "trtype": "TCP", 00:18:33.287 "max_queue_depth": 128, 00:18:33.287 "max_io_qpairs_per_ctrlr": 127, 00:18:33.287 "in_capsule_data_size": 4096, 00:18:33.287 "max_io_size": 131072, 00:18:33.287 "io_unit_size": 131072, 00:18:33.287 "max_aq_depth": 128, 00:18:33.287 "num_shared_buffers": 511, 00:18:33.287 "buf_cache_size": 4294967295, 00:18:33.287 "dif_insert_or_strip": false, 00:18:33.287 "zcopy": false, 00:18:33.287 "c2h_success": false, 00:18:33.287 "sock_priority": 0, 00:18:33.287 "abort_timeout_sec": 1, 00:18:33.287 "ack_timeout": 0, 00:18:33.287 "data_wr_pool_size": 0 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_create_subsystem", 00:18:33.287 "params": { 00:18:33.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.287 "allow_any_host": false, 00:18:33.287 "serial_number": "SPDK00000000000001", 00:18:33.287 "model_number": "SPDK bdev Controller", 00:18:33.287 "max_namespaces": 10, 00:18:33.287 "min_cntlid": 1, 00:18:33.287 "max_cntlid": 65519, 00:18:33.287 "ana_reporting": false 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_subsystem_add_host", 00:18:33.287 "params": { 00:18:33.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.287 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.287 "psk": "/tmp/tmp.zt4jvl7nly" 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_subsystem_add_ns", 00:18:33.287 "params": { 00:18:33.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.287 "namespace": { 00:18:33.287 "nsid": 1, 00:18:33.287 "bdev_name": "malloc0", 00:18:33.287 "nguid": "33052CF4E7294D7EA814D24E01873681", 00:18:33.287 "uuid": "33052cf4-e729-4d7e-a814-d24e01873681", 00:18:33.287 "no_auto_visible": false 00:18:33.287 } 00:18:33.287 } 00:18:33.287 }, 00:18:33.287 { 00:18:33.287 "method": "nvmf_subsystem_add_listener", 00:18:33.287 "params": { 00:18:33.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.287 "listen_address": { 00:18:33.287 "trtype": "TCP", 00:18:33.287 "adrfam": "IPv4", 00:18:33.287 "traddr": "10.0.0.2", 00:18:33.287 "trsvcid": "4420" 00:18:33.287 }, 00:18:33.287 "secure_channel": true 00:18:33.287 } 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 } 00:18:33.287 ] 00:18:33.287 }' 00:18:33.287 11:55:22 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=950087 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 950087 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 950087 ']' 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:33.287 11:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.287 [2024-07-12 11:55:22.677908] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:33.287 [2024-07-12 11:55:22.677990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.287 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.287 [2024-07-12 11:55:22.742462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.544 [2024-07-12 11:55:22.848909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.544 [2024-07-12 11:55:22.848977] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.544 [2024-07-12 11:55:22.849005] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.544 [2024-07-12 11:55:22.849018] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.544 [2024-07-12 11:55:22.849028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
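
Note: this target instance (pid 950087) is not configured through individual RPCs; the JSON captured by save_config above is fed back to nvmf_tgt through -c /dev/fd/62, so the subsystem, namespace, TLS listener and PSK host entry are all recreated at startup. A minimal equivalent of that round trip, assuming bash process substitution and abbreviating the binary and script paths used in the log:

  tgtconf=$(rpc.py save_config)                           # capture the running target's configuration
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &  # restart from that JSON instead of replaying RPCs
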
00:18:33.544 [2024-07-12 11:55:22.849127] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.805 [2024-07-12 11:55:23.088283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.805 [2024-07-12 11:55:23.104231] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.805 [2024-07-12 11:55:23.120279] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.805 [2024-07-12 11:55:23.133030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=950224 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 950224 /var/tmp/bdevperf.sock 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 950224 ']' 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
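
Note: on the initiator side the same PSK is exercised in two ways in this run. Here bdevperf is started with -z (wait for RPCs) and the bdev_nvme_attach_controller call carrying --psk is delivered inside the JSON config on /dev/fd/63; the earlier bdevperf instance in this log used the RPC-driven form instead, which condenses to the following (paths abbreviated; bdevperf, rpc.py and bdevperf.py stand for the build/examples and scripts helpers referenced above):

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly   # PSK-path form (deprecated, see warnings above)
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests    # run the verify workload
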
00:18:34.369 11:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:34.369 "subsystems": [ 00:18:34.369 { 00:18:34.369 "subsystem": "keyring", 00:18:34.369 "config": [] 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "subsystem": "iobuf", 00:18:34.369 "config": [ 00:18:34.369 { 00:18:34.369 "method": "iobuf_set_options", 00:18:34.369 "params": { 00:18:34.369 "small_pool_count": 8192, 00:18:34.369 "large_pool_count": 1024, 00:18:34.369 "small_bufsize": 8192, 00:18:34.369 "large_bufsize": 135168 00:18:34.369 } 00:18:34.369 } 00:18:34.369 ] 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "subsystem": "sock", 00:18:34.369 "config": [ 00:18:34.369 { 00:18:34.369 "method": "sock_set_default_impl", 00:18:34.369 "params": { 00:18:34.369 "impl_name": "posix" 00:18:34.369 } 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "method": "sock_impl_set_options", 00:18:34.369 "params": { 00:18:34.369 "impl_name": "ssl", 00:18:34.369 "recv_buf_size": 4096, 00:18:34.369 "send_buf_size": 4096, 00:18:34.369 "enable_recv_pipe": true, 00:18:34.369 "enable_quickack": false, 00:18:34.369 "enable_placement_id": 0, 00:18:34.369 "enable_zerocopy_send_server": true, 00:18:34.369 "enable_zerocopy_send_client": false, 00:18:34.369 "zerocopy_threshold": 0, 00:18:34.369 "tls_version": 0, 00:18:34.369 "enable_ktls": false 00:18:34.369 } 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "method": "sock_impl_set_options", 00:18:34.369 "params": { 00:18:34.369 "impl_name": "posix", 00:18:34.369 "recv_buf_size": 2097152, 00:18:34.369 "send_buf_size": 2097152, 00:18:34.369 "enable_recv_pipe": true, 00:18:34.369 "enable_quickack": false, 00:18:34.369 "enable_placement_id": 0, 00:18:34.369 "enable_zerocopy_send_server": true, 00:18:34.369 "enable_zerocopy_send_client": false, 00:18:34.369 "zerocopy_threshold": 0, 00:18:34.369 "tls_version": 0, 00:18:34.369 "enable_ktls": false 00:18:34.369 } 00:18:34.369 } 00:18:34.369 ] 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "subsystem": "vmd", 00:18:34.369 "config": [] 00:18:34.369 }, 00:18:34.369 { 00:18:34.369 "subsystem": "accel", 00:18:34.370 "config": [ 00:18:34.370 { 00:18:34.370 "method": "accel_set_options", 00:18:34.370 "params": { 00:18:34.370 "small_cache_size": 128, 00:18:34.370 "large_cache_size": 16, 00:18:34.370 "task_count": 2048, 00:18:34.370 "sequence_count": 2048, 00:18:34.370 "buf_count": 2048 00:18:34.370 } 00:18:34.370 } 00:18:34.370 ] 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "subsystem": "bdev", 00:18:34.370 "config": [ 00:18:34.370 { 00:18:34.370 "method": "bdev_set_options", 00:18:34.370 "params": { 00:18:34.370 "bdev_io_pool_size": 65535, 00:18:34.370 "bdev_io_cache_size": 256, 00:18:34.370 "bdev_auto_examine": true, 00:18:34.370 "iobuf_small_cache_size": 128, 00:18:34.370 "iobuf_large_cache_size": 16 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_raid_set_options", 00:18:34.370 "params": { 00:18:34.370 "process_window_size_kb": 1024 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_iscsi_set_options", 00:18:34.370 "params": { 00:18:34.370 "timeout_sec": 30 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_nvme_set_options", 00:18:34.370 "params": { 00:18:34.370 "action_on_timeout": "none", 00:18:34.370 "timeout_us": 0, 00:18:34.370 "timeout_admin_us": 0, 00:18:34.370 "keep_alive_timeout_ms": 10000, 00:18:34.370 "arbitration_burst": 0, 00:18:34.370 "low_priority_weight": 0, 00:18:34.370 "medium_priority_weight": 0, 00:18:34.370 "high_priority_weight": 0, 00:18:34.370 
"nvme_adminq_poll_period_us": 10000, 00:18:34.370 "nvme_ioq_poll_period_us": 0, 00:18:34.370 "io_queue_requests": 512, 00:18:34.370 "delay_cmd_submit": true, 00:18:34.370 "transport_retry_count": 4, 00:18:34.370 "bdev_retry_count": 3, 00:18:34.370 "transport_ack_timeout": 0, 00:18:34.370 "ctrlr_loss_timeout_sec": 0, 00:18:34.370 "reconnect_delay_sec": 0, 00:18:34.370 "fast_io_fail_timeout_sec": 0, 00:18:34.370 "disable_auto_failback": false, 00:18:34.370 "generate_uuids": false, 00:18:34.370 "transport_tos": 0, 00:18:34.370 "nvme_error_stat": false, 00:18:34.370 "rdma_srq_size": 0, 00:18:34.370 "io_path_stat": false, 00:18:34.370 "allow_accel_sequence": false, 00:18:34.370 "rdma_max_cq_size": 0, 00:18:34.370 "rdma_cm_event_timeout_ms": 0, 00:18:34.370 "dhchap_digests": [ 00:18:34.370 "sha256", 00:18:34.370 "sha384", 00:18:34.370 "sha512" 00:18:34.370 ], 00:18:34.370 "dhchap_dhgroups": [ 00:18:34.370 "null", 00:18:34.370 "ffdhe2048", 00:18:34.370 "ffdhe3072", 00:18:34.370 "ffdhe4096", 00:18:34.370 "ffdhe6144", 00:18:34.370 "ffdhe8192" 00:18:34.370 ] 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_nvme_attach_controller", 00:18:34.370 "params": { 00:18:34.370 "name": "TLSTEST", 00:18:34.370 "trtype": "TCP", 00:18:34.370 "adrfam": "IPv4", 00:18:34.370 "traddr": "10.0.0.2", 00:18:34.370 "trsvcid": "4420", 00:18:34.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.370 "prchk_reftag": false, 00:18:34.370 "prchk_guard": false, 00:18:34.370 "ctrlr_loss_timeout_sec": 0, 00:18:34.370 "reconnect_delay_sec": 0, 00:18:34.370 "fast_io_fail_timeout_sec": 0, 00:18:34.370 "psk": "/tmp/tmp.zt4jvl7nly", 00:18:34.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.370 "hdgst": false, 00:18:34.370 "ddgst": false 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_nvme_set_hotplug", 00:18:34.370 "params": { 00:18:34.370 "period_us": 100000, 00:18:34.370 "enable": false 00:18:34.370 } 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "method": "bdev_wait_for_examine" 00:18:34.370 } 00:18:34.370 ] 00:18:34.370 }, 00:18:34.370 { 00:18:34.370 "subsystem": "nbd", 00:18:34.370 "config": [] 00:18:34.370 } 00:18:34.370 ] 00:18:34.370 }' 00:18:34.370 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:34.370 11:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.370 [2024-07-12 11:55:23.730255] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:34.370 [2024-07-12 11:55:23.730328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950224 ] 00:18:34.370 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.370 [2024-07-12 11:55:23.790953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.627 [2024-07-12 11:55:23.899368] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.627 [2024-07-12 11:55:24.070780] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.627 [2024-07-12 11:55:24.070937] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:35.564 11:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:35.564 11:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:35.564 11:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.564 Running I/O for 10 seconds... 00:18:45.556 00:18:45.556 Latency(us) 00:18:45.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.556 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.556 Verification LBA range: start 0x0 length 0x2000 00:18:45.556 TLSTESTn1 : 10.03 3530.22 13.79 0.00 0.00 36178.45 8835.22 40389.59 00:18:45.556 =================================================================================================================== 00:18:45.556 Total : 3530.22 13.79 0.00 0.00 36178.45 8835.22 40389.59 00:18:45.556 0 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 950224 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 950224 ']' 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 950224 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 950224 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 950224' 00:18:45.556 killing process with pid 950224 00:18:45.556 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 950224 00:18:45.556 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.556 00:18:45.556 Latency(us) 00:18:45.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.557 =================================================================================================================== 00:18:45.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.557 [2024-07-12 11:55:34.945920] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:18:45.557 11:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 950224 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 950087 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 950087 ']' 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 950087 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 950087 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 950087' 00:18:45.816 killing process with pid 950087 00:18:45.816 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 950087 00:18:45.816 [2024-07-12 11:55:35.240658] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.817 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 950087 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=951671 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 951671 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 951671 ']' 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:46.074 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.335 [2024-07-12 11:55:35.595924] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:46.335 [2024-07-12 11:55:35.596004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.335 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.335 [2024-07-12 11:55:35.660423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.335 [2024-07-12 11:55:35.769401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:46.335 [2024-07-12 11:55:35.769465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.335 [2024-07-12 11:55:35.769495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.335 [2024-07-12 11:55:35.769506] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.335 [2024-07-12 11:55:35.769516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.335 [2024-07-12 11:55:35.769544] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.zt4jvl7nly 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zt4jvl7nly 00:18:46.594 11:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.851 [2024-07-12 11:55:36.159663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.851 11:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.109 11:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.367 [2024-07-12 11:55:36.644990] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.367 [2024-07-12 11:55:36.645232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.367 11:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.626 malloc0 00:18:47.626 11:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.883 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zt4jvl7nly 00:18:48.140 [2024-07-12 11:55:37.394534] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:48.140 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=951840 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 951840 /var/tmp/bdevperf.sock 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 951840 ']' 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:48.141 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.141 [2024-07-12 11:55:37.458061] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:48.141 [2024-07-12 11:55:37.458132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951840 ] 00:18:48.141 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.141 [2024-07-12 11:55:37.521986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.400 [2024-07-12 11:55:37.640422] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.400 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:48.400 11:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:48.400 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zt4jvl7nly 00:18:48.657 11:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:48.915 [2024-07-12 11:55:38.220053] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.915 nvme0n1 00:18:48.915 11:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.915 Running I/O for 1 seconds... 
00:18:50.290 00:18:50.290 Latency(us) 00:18:50.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.290 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:50.290 Verification LBA range: start 0x0 length 0x2000 00:18:50.290 nvme0n1 : 1.02 3459.49 13.51 0.00 0.00 36658.51 6213.78 35535.08 00:18:50.290 =================================================================================================================== 00:18:50.291 Total : 3459.49 13.51 0.00 0.00 36658.51 6213.78 35535.08 00:18:50.291 0 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 951840 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 951840 ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 951840 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 951840 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 951840' 00:18:50.291 killing process with pid 951840 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 951840 00:18:50.291 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.291 00:18:50.291 Latency(us) 00:18:50.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.291 =================================================================================================================== 00:18:50.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 951840 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 951671 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 951671 ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 951671 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 951671 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 951671' 00:18:50.291 killing process with pid 951671 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 951671 00:18:50.291 [2024-07-12 11:55:39.769352] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:50.291 11:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 951671 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.859 11:55:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=952234 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 952234 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 952234 ']' 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:50.859 11:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.859 [2024-07-12 11:55:40.106275] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:50.859 [2024-07-12 11:55:40.106369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.859 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.859 [2024-07-12 11:55:40.173988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.859 [2024-07-12 11:55:40.292097] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.859 [2024-07-12 11:55:40.292155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.859 [2024-07-12 11:55:40.292181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.859 [2024-07-12 11:55:40.292195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.859 [2024-07-12 11:55:40.292208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.859 [2024-07-12 11:55:40.292258] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.795 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:51.795 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:51.795 11:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.796 [2024-07-12 11:55:41.112003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.796 malloc0 00:18:51.796 [2024-07-12 11:55:41.144224] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.796 [2024-07-12 11:55:41.144471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=952388 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 952388 /var/tmp/bdevperf.sock 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 952388 ']' 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:51.796 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.796 [2024-07-12 11:55:41.214515] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:51.796 [2024-07-12 11:55:41.214591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952388 ] 00:18:51.796 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.796 [2024-07-12 11:55:41.276609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.053 [2024-07-12 11:55:41.396566] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.053 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:52.053 11:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:52.053 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zt4jvl7nly 00:18:52.311 11:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:52.570 [2024-07-12 11:55:41.967661] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.570 nvme0n1 00:18:52.570 11:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.829 Running I/O for 1 seconds... 00:18:53.768 00:18:53.768 Latency(us) 00:18:53.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.768 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.768 Verification LBA range: start 0x0 length 0x2000 00:18:53.768 nvme0n1 : 1.02 3567.77 13.94 0.00 0.00 35482.00 9369.22 26796.94 00:18:53.768 =================================================================================================================== 00:18:53.768 Total : 3567.77 13.94 0.00 0.00 35482.00 9369.22 26796.94 00:18:53.768 0 00:18:53.768 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:53.768 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.768 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.025 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.025 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:54.025 "subsystems": [ 00:18:54.025 { 00:18:54.025 "subsystem": "keyring", 00:18:54.025 "config": [ 00:18:54.025 { 00:18:54.025 "method": "keyring_file_add_key", 00:18:54.025 "params": { 00:18:54.025 "name": "key0", 00:18:54.025 "path": "/tmp/tmp.zt4jvl7nly" 00:18:54.025 } 00:18:54.025 } 00:18:54.025 ] 00:18:54.025 }, 00:18:54.025 { 00:18:54.025 "subsystem": "iobuf", 00:18:54.025 "config": [ 00:18:54.025 { 00:18:54.025 "method": "iobuf_set_options", 00:18:54.025 "params": { 00:18:54.025 "small_pool_count": 8192, 00:18:54.025 "large_pool_count": 1024, 00:18:54.025 "small_bufsize": 8192, 00:18:54.025 "large_bufsize": 135168 00:18:54.025 } 00:18:54.025 } 00:18:54.025 ] 00:18:54.025 }, 00:18:54.025 { 00:18:54.026 "subsystem": "sock", 00:18:54.026 "config": [ 00:18:54.026 { 00:18:54.026 "method": "sock_set_default_impl", 00:18:54.026 "params": { 00:18:54.026 "impl_name": "posix" 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 
{ 00:18:54.026 "method": "sock_impl_set_options", 00:18:54.026 "params": { 00:18:54.026 "impl_name": "ssl", 00:18:54.026 "recv_buf_size": 4096, 00:18:54.026 "send_buf_size": 4096, 00:18:54.026 "enable_recv_pipe": true, 00:18:54.026 "enable_quickack": false, 00:18:54.026 "enable_placement_id": 0, 00:18:54.026 "enable_zerocopy_send_server": true, 00:18:54.026 "enable_zerocopy_send_client": false, 00:18:54.026 "zerocopy_threshold": 0, 00:18:54.026 "tls_version": 0, 00:18:54.026 "enable_ktls": false 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "sock_impl_set_options", 00:18:54.026 "params": { 00:18:54.026 "impl_name": "posix", 00:18:54.026 "recv_buf_size": 2097152, 00:18:54.026 "send_buf_size": 2097152, 00:18:54.026 "enable_recv_pipe": true, 00:18:54.026 "enable_quickack": false, 00:18:54.026 "enable_placement_id": 0, 00:18:54.026 "enable_zerocopy_send_server": true, 00:18:54.026 "enable_zerocopy_send_client": false, 00:18:54.026 "zerocopy_threshold": 0, 00:18:54.026 "tls_version": 0, 00:18:54.026 "enable_ktls": false 00:18:54.026 } 00:18:54.026 } 00:18:54.026 ] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "vmd", 00:18:54.026 "config": [] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "accel", 00:18:54.026 "config": [ 00:18:54.026 { 00:18:54.026 "method": "accel_set_options", 00:18:54.026 "params": { 00:18:54.026 "small_cache_size": 128, 00:18:54.026 "large_cache_size": 16, 00:18:54.026 "task_count": 2048, 00:18:54.026 "sequence_count": 2048, 00:18:54.026 "buf_count": 2048 00:18:54.026 } 00:18:54.026 } 00:18:54.026 ] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "bdev", 00:18:54.026 "config": [ 00:18:54.026 { 00:18:54.026 "method": "bdev_set_options", 00:18:54.026 "params": { 00:18:54.026 "bdev_io_pool_size": 65535, 00:18:54.026 "bdev_io_cache_size": 256, 00:18:54.026 "bdev_auto_examine": true, 00:18:54.026 "iobuf_small_cache_size": 128, 00:18:54.026 "iobuf_large_cache_size": 16 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_raid_set_options", 00:18:54.026 "params": { 00:18:54.026 "process_window_size_kb": 1024 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_iscsi_set_options", 00:18:54.026 "params": { 00:18:54.026 "timeout_sec": 30 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_nvme_set_options", 00:18:54.026 "params": { 00:18:54.026 "action_on_timeout": "none", 00:18:54.026 "timeout_us": 0, 00:18:54.026 "timeout_admin_us": 0, 00:18:54.026 "keep_alive_timeout_ms": 10000, 00:18:54.026 "arbitration_burst": 0, 00:18:54.026 "low_priority_weight": 0, 00:18:54.026 "medium_priority_weight": 0, 00:18:54.026 "high_priority_weight": 0, 00:18:54.026 "nvme_adminq_poll_period_us": 10000, 00:18:54.026 "nvme_ioq_poll_period_us": 0, 00:18:54.026 "io_queue_requests": 0, 00:18:54.026 "delay_cmd_submit": true, 00:18:54.026 "transport_retry_count": 4, 00:18:54.026 "bdev_retry_count": 3, 00:18:54.026 "transport_ack_timeout": 0, 00:18:54.026 "ctrlr_loss_timeout_sec": 0, 00:18:54.026 "reconnect_delay_sec": 0, 00:18:54.026 "fast_io_fail_timeout_sec": 0, 00:18:54.026 "disable_auto_failback": false, 00:18:54.026 "generate_uuids": false, 00:18:54.026 "transport_tos": 0, 00:18:54.026 "nvme_error_stat": false, 00:18:54.026 "rdma_srq_size": 0, 00:18:54.026 "io_path_stat": false, 00:18:54.026 "allow_accel_sequence": false, 00:18:54.026 "rdma_max_cq_size": 0, 00:18:54.026 "rdma_cm_event_timeout_ms": 0, 00:18:54.026 "dhchap_digests": [ 00:18:54.026 "sha256", 00:18:54.026 "sha384", 
00:18:54.026 "sha512" 00:18:54.026 ], 00:18:54.026 "dhchap_dhgroups": [ 00:18:54.026 "null", 00:18:54.026 "ffdhe2048", 00:18:54.026 "ffdhe3072", 00:18:54.026 "ffdhe4096", 00:18:54.026 "ffdhe6144", 00:18:54.026 "ffdhe8192" 00:18:54.026 ] 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_nvme_set_hotplug", 00:18:54.026 "params": { 00:18:54.026 "period_us": 100000, 00:18:54.026 "enable": false 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_malloc_create", 00:18:54.026 "params": { 00:18:54.026 "name": "malloc0", 00:18:54.026 "num_blocks": 8192, 00:18:54.026 "block_size": 4096, 00:18:54.026 "physical_block_size": 4096, 00:18:54.026 "uuid": "f3c49259-655c-406d-9e49-0b790819fea8", 00:18:54.026 "optimal_io_boundary": 0 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "bdev_wait_for_examine" 00:18:54.026 } 00:18:54.026 ] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "nbd", 00:18:54.026 "config": [] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "scheduler", 00:18:54.026 "config": [ 00:18:54.026 { 00:18:54.026 "method": "framework_set_scheduler", 00:18:54.026 "params": { 00:18:54.026 "name": "static" 00:18:54.026 } 00:18:54.026 } 00:18:54.026 ] 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "subsystem": "nvmf", 00:18:54.026 "config": [ 00:18:54.026 { 00:18:54.026 "method": "nvmf_set_config", 00:18:54.026 "params": { 00:18:54.026 "discovery_filter": "match_any", 00:18:54.026 "admin_cmd_passthru": { 00:18:54.026 "identify_ctrlr": false 00:18:54.026 } 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_set_max_subsystems", 00:18:54.026 "params": { 00:18:54.026 "max_subsystems": 1024 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_set_crdt", 00:18:54.026 "params": { 00:18:54.026 "crdt1": 0, 00:18:54.026 "crdt2": 0, 00:18:54.026 "crdt3": 0 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_create_transport", 00:18:54.026 "params": { 00:18:54.026 "trtype": "TCP", 00:18:54.026 "max_queue_depth": 128, 00:18:54.026 "max_io_qpairs_per_ctrlr": 127, 00:18:54.026 "in_capsule_data_size": 4096, 00:18:54.026 "max_io_size": 131072, 00:18:54.026 "io_unit_size": 131072, 00:18:54.026 "max_aq_depth": 128, 00:18:54.026 "num_shared_buffers": 511, 00:18:54.026 "buf_cache_size": 4294967295, 00:18:54.026 "dif_insert_or_strip": false, 00:18:54.026 "zcopy": false, 00:18:54.026 "c2h_success": false, 00:18:54.026 "sock_priority": 0, 00:18:54.026 "abort_timeout_sec": 1, 00:18:54.026 "ack_timeout": 0, 00:18:54.026 "data_wr_pool_size": 0 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_create_subsystem", 00:18:54.026 "params": { 00:18:54.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.026 "allow_any_host": false, 00:18:54.026 "serial_number": "00000000000000000000", 00:18:54.026 "model_number": "SPDK bdev Controller", 00:18:54.026 "max_namespaces": 32, 00:18:54.026 "min_cntlid": 1, 00:18:54.026 "max_cntlid": 65519, 00:18:54.026 "ana_reporting": false 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_subsystem_add_host", 00:18:54.026 "params": { 00:18:54.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.026 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.026 "psk": "key0" 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_subsystem_add_ns", 00:18:54.026 "params": { 00:18:54.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.026 "namespace": { 00:18:54.026 "nsid": 1, 00:18:54.026 "bdev_name": 
"malloc0", 00:18:54.026 "nguid": "F3C49259655C406D9E490B790819FEA8", 00:18:54.026 "uuid": "f3c49259-655c-406d-9e49-0b790819fea8", 00:18:54.026 "no_auto_visible": false 00:18:54.026 } 00:18:54.026 } 00:18:54.026 }, 00:18:54.026 { 00:18:54.026 "method": "nvmf_subsystem_add_listener", 00:18:54.026 "params": { 00:18:54.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.026 "listen_address": { 00:18:54.026 "trtype": "TCP", 00:18:54.026 "adrfam": "IPv4", 00:18:54.027 "traddr": "10.0.0.2", 00:18:54.027 "trsvcid": "4420" 00:18:54.027 }, 00:18:54.027 "secure_channel": true 00:18:54.027 } 00:18:54.027 } 00:18:54.027 ] 00:18:54.027 } 00:18:54.027 ] 00:18:54.027 }' 00:18:54.027 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:54.286 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:54.286 "subsystems": [ 00:18:54.286 { 00:18:54.286 "subsystem": "keyring", 00:18:54.286 "config": [ 00:18:54.286 { 00:18:54.286 "method": "keyring_file_add_key", 00:18:54.286 "params": { 00:18:54.286 "name": "key0", 00:18:54.286 "path": "/tmp/tmp.zt4jvl7nly" 00:18:54.286 } 00:18:54.286 } 00:18:54.286 ] 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "subsystem": "iobuf", 00:18:54.286 "config": [ 00:18:54.286 { 00:18:54.286 "method": "iobuf_set_options", 00:18:54.286 "params": { 00:18:54.286 "small_pool_count": 8192, 00:18:54.286 "large_pool_count": 1024, 00:18:54.286 "small_bufsize": 8192, 00:18:54.286 "large_bufsize": 135168 00:18:54.286 } 00:18:54.286 } 00:18:54.286 ] 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "subsystem": "sock", 00:18:54.286 "config": [ 00:18:54.286 { 00:18:54.286 "method": "sock_set_default_impl", 00:18:54.286 "params": { 00:18:54.286 "impl_name": "posix" 00:18:54.286 } 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "method": "sock_impl_set_options", 00:18:54.286 "params": { 00:18:54.286 "impl_name": "ssl", 00:18:54.286 "recv_buf_size": 4096, 00:18:54.286 "send_buf_size": 4096, 00:18:54.286 "enable_recv_pipe": true, 00:18:54.286 "enable_quickack": false, 00:18:54.286 "enable_placement_id": 0, 00:18:54.286 "enable_zerocopy_send_server": true, 00:18:54.286 "enable_zerocopy_send_client": false, 00:18:54.286 "zerocopy_threshold": 0, 00:18:54.286 "tls_version": 0, 00:18:54.286 "enable_ktls": false 00:18:54.286 } 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "method": "sock_impl_set_options", 00:18:54.286 "params": { 00:18:54.286 "impl_name": "posix", 00:18:54.286 "recv_buf_size": 2097152, 00:18:54.286 "send_buf_size": 2097152, 00:18:54.286 "enable_recv_pipe": true, 00:18:54.286 "enable_quickack": false, 00:18:54.286 "enable_placement_id": 0, 00:18:54.286 "enable_zerocopy_send_server": true, 00:18:54.286 "enable_zerocopy_send_client": false, 00:18:54.286 "zerocopy_threshold": 0, 00:18:54.286 "tls_version": 0, 00:18:54.286 "enable_ktls": false 00:18:54.286 } 00:18:54.286 } 00:18:54.286 ] 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "subsystem": "vmd", 00:18:54.286 "config": [] 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "subsystem": "accel", 00:18:54.286 "config": [ 00:18:54.286 { 00:18:54.286 "method": "accel_set_options", 00:18:54.286 "params": { 00:18:54.286 "small_cache_size": 128, 00:18:54.286 "large_cache_size": 16, 00:18:54.286 "task_count": 2048, 00:18:54.286 "sequence_count": 2048, 00:18:54.286 "buf_count": 2048 00:18:54.286 } 00:18:54.286 } 00:18:54.286 ] 00:18:54.286 }, 00:18:54.286 { 00:18:54.286 "subsystem": "bdev", 00:18:54.286 "config": [ 00:18:54.286 { 00:18:54.286 
"method": "bdev_set_options", 00:18:54.286 "params": { 00:18:54.286 "bdev_io_pool_size": 65535, 00:18:54.286 "bdev_io_cache_size": 256, 00:18:54.287 "bdev_auto_examine": true, 00:18:54.287 "iobuf_small_cache_size": 128, 00:18:54.287 "iobuf_large_cache_size": 16 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_raid_set_options", 00:18:54.287 "params": { 00:18:54.287 "process_window_size_kb": 1024 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_iscsi_set_options", 00:18:54.287 "params": { 00:18:54.287 "timeout_sec": 30 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_nvme_set_options", 00:18:54.287 "params": { 00:18:54.287 "action_on_timeout": "none", 00:18:54.287 "timeout_us": 0, 00:18:54.287 "timeout_admin_us": 0, 00:18:54.287 "keep_alive_timeout_ms": 10000, 00:18:54.287 "arbitration_burst": 0, 00:18:54.287 "low_priority_weight": 0, 00:18:54.287 "medium_priority_weight": 0, 00:18:54.287 "high_priority_weight": 0, 00:18:54.287 "nvme_adminq_poll_period_us": 10000, 00:18:54.287 "nvme_ioq_poll_period_us": 0, 00:18:54.287 "io_queue_requests": 512, 00:18:54.287 "delay_cmd_submit": true, 00:18:54.287 "transport_retry_count": 4, 00:18:54.287 "bdev_retry_count": 3, 00:18:54.287 "transport_ack_timeout": 0, 00:18:54.287 "ctrlr_loss_timeout_sec": 0, 00:18:54.287 "reconnect_delay_sec": 0, 00:18:54.287 "fast_io_fail_timeout_sec": 0, 00:18:54.287 "disable_auto_failback": false, 00:18:54.287 "generate_uuids": false, 00:18:54.287 "transport_tos": 0, 00:18:54.287 "nvme_error_stat": false, 00:18:54.287 "rdma_srq_size": 0, 00:18:54.287 "io_path_stat": false, 00:18:54.287 "allow_accel_sequence": false, 00:18:54.287 "rdma_max_cq_size": 0, 00:18:54.287 "rdma_cm_event_timeout_ms": 0, 00:18:54.287 "dhchap_digests": [ 00:18:54.287 "sha256", 00:18:54.287 "sha384", 00:18:54.287 "sha512" 00:18:54.287 ], 00:18:54.287 "dhchap_dhgroups": [ 00:18:54.287 "null", 00:18:54.287 "ffdhe2048", 00:18:54.287 "ffdhe3072", 00:18:54.287 "ffdhe4096", 00:18:54.287 "ffdhe6144", 00:18:54.287 "ffdhe8192" 00:18:54.287 ] 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_nvme_attach_controller", 00:18:54.287 "params": { 00:18:54.287 "name": "nvme0", 00:18:54.287 "trtype": "TCP", 00:18:54.287 "adrfam": "IPv4", 00:18:54.287 "traddr": "10.0.0.2", 00:18:54.287 "trsvcid": "4420", 00:18:54.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.287 "prchk_reftag": false, 00:18:54.287 "prchk_guard": false, 00:18:54.287 "ctrlr_loss_timeout_sec": 0, 00:18:54.287 "reconnect_delay_sec": 0, 00:18:54.287 "fast_io_fail_timeout_sec": 0, 00:18:54.287 "psk": "key0", 00:18:54.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.287 "hdgst": false, 00:18:54.287 "ddgst": false 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_nvme_set_hotplug", 00:18:54.287 "params": { 00:18:54.287 "period_us": 100000, 00:18:54.287 "enable": false 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_enable_histogram", 00:18:54.287 "params": { 00:18:54.287 "name": "nvme0n1", 00:18:54.287 "enable": true 00:18:54.287 } 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "method": "bdev_wait_for_examine" 00:18:54.287 } 00:18:54.287 ] 00:18:54.287 }, 00:18:54.287 { 00:18:54.287 "subsystem": "nbd", 00:18:54.287 "config": [] 00:18:54.287 } 00:18:54.287 ] 00:18:54.287 }' 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 952388 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 952388 ']' 
00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 952388 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952388 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952388' 00:18:54.287 killing process with pid 952388 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 952388 00:18:54.287 Received shutdown signal, test time was about 1.000000 seconds 00:18:54.287 00:18:54.287 Latency(us) 00:18:54.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.287 =================================================================================================================== 00:18:54.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.287 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 952388 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 952234 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 952234 ']' 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 952234 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952234 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952234' 00:18:54.546 killing process with pid 952234 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 952234 00:18:54.546 11:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 952234 00:18:54.804 11:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:54.804 11:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.804 11:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:54.804 "subsystems": [ 00:18:54.804 { 00:18:54.804 "subsystem": "keyring", 00:18:54.804 "config": [ 00:18:54.804 { 00:18:54.804 "method": "keyring_file_add_key", 00:18:54.804 "params": { 00:18:54.804 "name": "key0", 00:18:54.804 "path": "/tmp/tmp.zt4jvl7nly" 00:18:54.804 } 00:18:54.804 } 00:18:54.804 ] 00:18:54.804 }, 00:18:54.804 { 00:18:54.804 "subsystem": "iobuf", 00:18:54.804 "config": [ 00:18:54.804 { 00:18:54.804 "method": "iobuf_set_options", 00:18:54.804 "params": { 00:18:54.804 "small_pool_count": 8192, 00:18:54.804 "large_pool_count": 1024, 00:18:54.804 "small_bufsize": 8192, 00:18:54.804 "large_bufsize": 135168 00:18:54.804 } 00:18:54.804 } 00:18:54.804 ] 00:18:54.804 }, 00:18:54.804 { 00:18:54.804 "subsystem": "sock", 00:18:54.804 "config": [ 00:18:54.804 { 00:18:54.804 "method": "sock_set_default_impl", 00:18:54.804 "params": { 
00:18:54.804 "impl_name": "posix" 00:18:54.804 } 00:18:54.804 }, 00:18:54.804 { 00:18:54.804 "method": "sock_impl_set_options", 00:18:54.804 "params": { 00:18:54.804 "impl_name": "ssl", 00:18:54.804 "recv_buf_size": 4096, 00:18:54.804 "send_buf_size": 4096, 00:18:54.804 "enable_recv_pipe": true, 00:18:54.804 "enable_quickack": false, 00:18:54.804 "enable_placement_id": 0, 00:18:54.804 "enable_zerocopy_send_server": true, 00:18:54.804 "enable_zerocopy_send_client": false, 00:18:54.804 "zerocopy_threshold": 0, 00:18:54.804 "tls_version": 0, 00:18:54.804 "enable_ktls": false 00:18:54.804 } 00:18:54.804 }, 00:18:54.804 { 00:18:54.804 "method": "sock_impl_set_options", 00:18:54.804 "params": { 00:18:54.804 "impl_name": "posix", 00:18:54.804 "recv_buf_size": 2097152, 00:18:54.804 "send_buf_size": 2097152, 00:18:54.804 "enable_recv_pipe": true, 00:18:54.804 "enable_quickack": false, 00:18:54.804 "enable_placement_id": 0, 00:18:54.804 "enable_zerocopy_send_server": true, 00:18:54.804 "enable_zerocopy_send_client": false, 00:18:54.804 "zerocopy_threshold": 0, 00:18:54.805 "tls_version": 0, 00:18:54.805 "enable_ktls": false 00:18:54.805 } 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "vmd", 00:18:54.805 "config": [] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "accel", 00:18:54.805 "config": [ 00:18:54.805 { 00:18:54.805 "method": "accel_set_options", 00:18:54.805 "params": { 00:18:54.805 "small_cache_size": 128, 00:18:54.805 "large_cache_size": 16, 00:18:54.805 "task_count": 2048, 00:18:54.805 "sequence_count": 2048, 00:18:54.805 "buf_count": 2048 00:18:54.805 } 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "bdev", 00:18:54.805 "config": [ 00:18:54.805 { 00:18:54.805 "method": "bdev_set_options", 00:18:54.805 "params": { 00:18:54.805 "bdev_io_pool_size": 65535, 00:18:54.805 "bdev_io_cache_size": 256, 00:18:54.805 "bdev_auto_examine": true, 00:18:54.805 "iobuf_small_cache_size": 128, 00:18:54.805 "iobuf_large_cache_size": 16 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_raid_set_options", 00:18:54.805 "params": { 00:18:54.805 "process_window_size_kb": 1024 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_iscsi_set_options", 00:18:54.805 "params": { 00:18:54.805 "timeout_sec": 30 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_nvme_set_options", 00:18:54.805 "params": { 00:18:54.805 "action_on_timeout": "none", 00:18:54.805 "timeout_us": 0, 00:18:54.805 "timeout_admin_us": 0, 00:18:54.805 "keep_alive_timeout_ms": 10000, 00:18:54.805 "arbitration_burst": 0, 00:18:54.805 "low_priority_weight": 0, 00:18:54.805 "medium_priority_weight": 0, 00:18:54.805 "high_priority_weight": 0, 00:18:54.805 "nvme_adminq_poll_period_us": 10000, 00:18:54.805 "nvme_ioq_poll_period_us": 0, 00:18:54.805 "io_queue_requests": 0, 00:18:54.805 "delay_cmd_submit": true, 00:18:54.805 "transport_retry_count": 4, 00:18:54.805 "bdev_retry_count": 3, 00:18:54.805 "transport_ack_timeout": 0, 00:18:54.805 "ctrlr_loss_timeout_sec": 0, 00:18:54.805 "reconnect_delay_sec": 0, 00:18:54.805 "fast_io_fail_timeout_sec": 0, 00:18:54.805 "disable_auto_failback": false, 00:18:54.805 "generate_uuids": false, 00:18:54.805 "transport_tos": 0, 00:18:54.805 "nvme_error_stat": false, 00:18:54.805 "rdma_srq_size": 0, 00:18:54.805 "io_path_stat": false, 00:18:54.805 "allow_accel_sequence": false, 00:18:54.805 "rdma_max_cq_size": 0, 00:18:54.805 "rdma_cm_event_timeout_ms": 0, 
00:18:54.805 "dhchap_digests": [ 00:18:54.805 "sha256", 00:18:54.805 "sha384", 00:18:54.805 "sha512" 00:18:54.805 ], 00:18:54.805 "dhchap_dhgroups": [ 00:18:54.805 "null", 00:18:54.805 "ffdhe2048", 00:18:54.805 "ffdhe3072", 00:18:54.805 "ffdhe4096", 00:18:54.805 "ffdhe6144", 00:18:54.805 "ffdhe8192" 00:18:54.805 ] 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_nvme_set_hotplug", 00:18:54.805 "params": { 00:18:54.805 "period_us": 100000, 00:18:54.805 "enable": false 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_malloc_create", 00:18:54.805 "params": { 00:18:54.805 "name": "malloc0", 00:18:54.805 "num_blocks": 8192, 00:18:54.805 "block_size": 4096, 00:18:54.805 "physical_block_size": 4096, 00:18:54.805 "uuid": "f3c49259-655c-406d-9e49-0b790819fea8", 00:18:54.805 "optimal_io_boundary": 0 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "bdev_wait_for_examine" 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "nbd", 00:18:54.805 "config": [] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "scheduler", 00:18:54.805 "config": [ 00:18:54.805 { 00:18:54.805 "method": "framework_set_scheduler", 00:18:54.805 "params": { 00:18:54.805 "name": "static" 00:18:54.805 } 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "subsystem": "nvmf", 00:18:54.805 "config": [ 00:18:54.805 { 00:18:54.805 "method": "nvmf_set_config", 00:18:54.805 "params": { 00:18:54.805 "discovery_filter": "match_any", 00:18:54.805 "admin_cmd_passthru": { 00:18:54.805 "identify_ctrlr": false 00:18:54.805 } 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_set_max_subsystems", 00:18:54.805 "params": { 00:18:54.805 "max_subsystems": 1024 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_set_crdt", 00:18:54.805 "params": { 00:18:54.805 "crdt1": 0, 00:18:54.805 "crdt2": 0, 00:18:54.805 "crdt3": 0 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_create_transport", 00:18:54.805 "params": { 00:18:54.805 "trtype": "TCP", 00:18:54.805 "max_queue_depth": 128, 00:18:54.805 "max_io_qpairs_per_ctrlr": 127, 00:18:54.805 "in_capsule_data_size": 4096, 00:18:54.805 "max_io_size": 131072, 00:18:54.805 "io_unit_size": 131072, 00:18:54.805 "max_aq_depth": 128, 00:18:54.805 "num_shared_buffers": 511, 00:18:54.805 "buf_cache_size": 4294967295, 00:18:54.805 "dif_insert_or_strip": false, 00:18:54.805 "zcopy": false, 00:18:54.805 "c2h_success": false, 00:18:54.805 "sock_priority": 0, 00:18:54.805 "abort_timeout_sec": 1, 00:18:54.805 "ack_timeout": 0, 00:18:54.805 "data_wr_pool_size": 0 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_create_subsystem", 00:18:54.805 "params": { 00:18:54.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.805 "allow_any_host": false, 00:18:54.805 "serial_number": "00000000000000000000", 00:18:54.805 "model_number": "SPDK bdev Controller", 00:18:54.805 "max_namespaces": 32, 00:18:54.805 "min_cntlid": 1, 00:18:54.805 "max_cntlid": 65519, 00:18:54.805 "ana_reporting": false 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_subsystem_add_host", 00:18:54.805 "params": { 00:18:54.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.805 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.805 "psk": "key0" 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_subsystem_add_ns", 00:18:54.805 "params": { 00:18:54.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:54.805 "namespace": { 00:18:54.805 "nsid": 1, 00:18:54.805 "bdev_name": "malloc0", 00:18:54.805 "nguid": "F3C49259655C406D9E490B790819FEA8", 00:18:54.805 "uuid": "f3c49259-655c-406d-9e49-0b790819fea8", 00:18:54.805 "no_auto_visible": false 00:18:54.805 } 00:18:54.805 } 00:18:54.805 }, 00:18:54.805 { 00:18:54.805 "method": "nvmf_subsystem_add_listener", 00:18:54.805 "params": { 00:18:54.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.805 "listen_address": { 00:18:54.805 "trtype": "TCP", 00:18:54.805 "adrfam": "IPv4", 00:18:54.805 "traddr": "10.0.0.2", 00:18:54.805 "trsvcid": "4420" 00:18:54.805 }, 00:18:54.805 "secure_channel": true 00:18:54.805 } 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 } 00:18:54.805 ] 00:18:54.805 }' 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=952685 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 952685 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 952685 ']' 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:54.805 11:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.063 [2024-07-12 11:55:44.341210] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:18:55.063 [2024-07-12 11:55:44.341296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.063 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.063 [2024-07-12 11:55:44.409442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.063 [2024-07-12 11:55:44.527244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.063 [2024-07-12 11:55:44.527293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.063 [2024-07-12 11:55:44.527309] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.063 [2024-07-12 11:55:44.527323] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.063 [2024-07-12 11:55:44.527334] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.063 [2024-07-12 11:55:44.527408] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.322 [2024-07-12 11:55:44.765674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.322 [2024-07-12 11:55:44.797692] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.322 [2024-07-12 11:55:44.807059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=952831 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 952831 /var/tmp/bdevperf.sock 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 952831 ']' 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:55.889 11:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:55.889 "subsystems": [ 00:18:55.889 { 00:18:55.889 "subsystem": "keyring", 00:18:55.889 "config": [ 00:18:55.889 { 00:18:55.889 "method": "keyring_file_add_key", 00:18:55.889 "params": { 00:18:55.889 "name": "key0", 00:18:55.889 "path": "/tmp/tmp.zt4jvl7nly" 00:18:55.889 } 00:18:55.889 } 00:18:55.889 ] 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "subsystem": "iobuf", 00:18:55.889 "config": [ 00:18:55.889 { 00:18:55.889 "method": "iobuf_set_options", 00:18:55.889 "params": { 00:18:55.889 "small_pool_count": 8192, 00:18:55.889 "large_pool_count": 1024, 00:18:55.889 "small_bufsize": 8192, 00:18:55.889 "large_bufsize": 135168 00:18:55.889 } 00:18:55.889 } 00:18:55.889 ] 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "subsystem": "sock", 00:18:55.889 "config": [ 00:18:55.889 { 00:18:55.889 "method": "sock_set_default_impl", 00:18:55.889 "params": { 00:18:55.889 "impl_name": "posix" 00:18:55.889 } 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "method": "sock_impl_set_options", 00:18:55.889 "params": { 00:18:55.889 "impl_name": "ssl", 00:18:55.889 "recv_buf_size": 4096, 00:18:55.889 "send_buf_size": 4096, 00:18:55.889 "enable_recv_pipe": true, 00:18:55.889 "enable_quickack": false, 00:18:55.889 "enable_placement_id": 0, 00:18:55.889 "enable_zerocopy_send_server": true, 00:18:55.889 "enable_zerocopy_send_client": false, 00:18:55.889 "zerocopy_threshold": 0, 00:18:55.889 "tls_version": 0, 00:18:55.889 "enable_ktls": false 00:18:55.889 } 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "method": "sock_impl_set_options", 00:18:55.889 "params": { 00:18:55.889 "impl_name": "posix", 00:18:55.889 "recv_buf_size": 2097152, 00:18:55.889 "send_buf_size": 2097152, 00:18:55.889 "enable_recv_pipe": true, 00:18:55.889 "enable_quickack": false, 00:18:55.889 "enable_placement_id": 0, 00:18:55.889 "enable_zerocopy_send_server": true, 00:18:55.889 "enable_zerocopy_send_client": false, 00:18:55.889 "zerocopy_threshold": 0, 00:18:55.889 "tls_version": 0, 00:18:55.889 "enable_ktls": false 00:18:55.889 } 00:18:55.889 } 00:18:55.889 ] 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "subsystem": "vmd", 00:18:55.889 "config": [] 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "subsystem": "accel", 00:18:55.889 "config": [ 00:18:55.889 { 00:18:55.889 "method": "accel_set_options", 00:18:55.889 "params": { 00:18:55.889 "small_cache_size": 128, 00:18:55.889 "large_cache_size": 16, 00:18:55.889 "task_count": 2048, 00:18:55.889 "sequence_count": 2048, 00:18:55.889 "buf_count": 2048 00:18:55.889 } 00:18:55.889 } 00:18:55.889 ] 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "subsystem": "bdev", 00:18:55.889 "config": [ 00:18:55.889 { 00:18:55.889 "method": "bdev_set_options", 00:18:55.889 "params": { 00:18:55.889 "bdev_io_pool_size": 65535, 00:18:55.889 "bdev_io_cache_size": 256, 00:18:55.889 "bdev_auto_examine": true, 00:18:55.889 "iobuf_small_cache_size": 128, 00:18:55.889 "iobuf_large_cache_size": 16 00:18:55.889 } 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "method": "bdev_raid_set_options", 00:18:55.889 "params": { 00:18:55.889 "process_window_size_kb": 1024 00:18:55.889 } 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "method": "bdev_iscsi_set_options", 00:18:55.889 "params": { 00:18:55.889 "timeout_sec": 30 00:18:55.889 } 00:18:55.889 }, 00:18:55.889 { 00:18:55.889 "method": "bdev_nvme_set_options", 00:18:55.889 "params": { 00:18:55.889 "action_on_timeout": "none", 00:18:55.889 "timeout_us": 0, 00:18:55.889 "timeout_admin_us": 0, 00:18:55.889 "keep_alive_timeout_ms": 
10000, 00:18:55.889 "arbitration_burst": 0, 00:18:55.889 "low_priority_weight": 0, 00:18:55.889 "medium_priority_weight": 0, 00:18:55.889 "high_priority_weight": 0, 00:18:55.889 "nvme_adminq_poll_period_us": 10000, 00:18:55.889 "nvme_ioq_poll_period_us": 0, 00:18:55.889 "io_queue_requests": 512, 00:18:55.889 "delay_cmd_submit": true, 00:18:55.889 "transport_retry_count": 4, 00:18:55.890 "bdev_retry_count": 3, 00:18:55.890 "transport_ack_timeout": 0, 00:18:55.890 "ctrlr_loss_timeout_sec": 0, 00:18:55.890 "reconnect_delay_sec": 0, 00:18:55.890 "fast_io_fail_timeout_sec": 0, 00:18:55.890 "disable_auto_failback": false, 00:18:55.890 "generate_uuids": false, 00:18:55.890 "transport_tos": 0, 00:18:55.890 "nvme_error_stat": false, 00:18:55.890 "rdma_srq_size": 0, 00:18:55.890 "io_path_stat": false, 00:18:55.890 "allow_accel_sequence": false, 00:18:55.890 "rdma_max_cq_size": 0, 00:18:55.890 "rdma_cm_event_timeout_ms": 0, 00:18:55.890 "dhchap_digests": [ 00:18:55.890 "sha256", 00:18:55.890 "sha384", 00:18:55.890 "shWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.890 a512" 00:18:55.890 ], 00:18:55.890 "dhchap_dhgroups": [ 00:18:55.890 "null", 00:18:55.890 "ffdhe2048", 00:18:55.890 "ffdhe3072", 00:18:55.890 "ffdhe4096", 00:18:55.890 "ffdhe6144", 00:18:55.890 "ffdhe8192" 00:18:55.890 ] 00:18:55.890 } 00:18:55.890 }, 00:18:55.890 { 00:18:55.890 "method": "bdev_nvme_attach_controller", 00:18:55.890 "params": { 00:18:55.890 "name": "nvme0", 00:18:55.890 "trtype": "TCP", 00:18:55.890 "adrfam": "IPv4", 00:18:55.890 "traddr": "10.0.0.2", 00:18:55.890 "trsvcid": "4420", 00:18:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.890 "prchk_reftag": false, 00:18:55.890 "prchk_guard": false, 00:18:55.890 "ctrlr_loss_timeout_sec": 0, 00:18:55.890 "reconnect_delay_sec": 0, 00:18:55.890 "fast_io_fail_timeout_sec": 0, 00:18:55.890 "psk": "key0", 00:18:55.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.890 "hdgst": false, 00:18:55.890 "ddgst": false 00:18:55.890 } 00:18:55.890 }, 00:18:55.890 { 00:18:55.890 "method": "bdev_nvme_set_hotplug", 00:18:55.890 "params": { 00:18:55.890 "period_us": 100000, 00:18:55.890 "enable": false 00:18:55.890 } 00:18:55.890 }, 00:18:55.890 { 00:18:55.890 "method": "bdev_enable_histogram", 00:18:55.890 "params": { 00:18:55.890 "name": "nvme0n1", 00:18:55.890 "enable": true 00:18:55.890 } 00:18:55.890 }, 00:18:55.890 { 00:18:55.890 "method": "bdev_wait_for_examine" 00:18:55.890 } 00:18:55.890 ] 00:18:55.890 }, 00:18:55.890 { 00:18:55.890 "subsystem": "nbd", 00:18:55.890 "config": [] 00:18:55.890 } 00:18:55.890 ] 00:18:55.890 }' 00:18:55.890 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:55.890 11:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.150 [2024-07-12 11:55:45.384448] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:18:56.150 [2024-07-12 11:55:45.384518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952831 ] 00:18:56.150 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.150 [2024-07-12 11:55:45.449704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.150 [2024-07-12 11:55:45.569100] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.409 [2024-07-12 11:55:45.755336] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.009 11:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:57.009 11:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:18:57.009 11:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.009 11:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:57.267 11:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.267 11:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.267 Running I/O for 1 seconds... 00:18:58.647 00:18:58.647 Latency(us) 00:18:58.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.647 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.647 Verification LBA range: start 0x0 length 0x2000 00:18:58.647 nvme0n1 : 1.03 3422.91 13.37 0.00 0.00 36866.18 6505.05 38253.61 00:18:58.647 =================================================================================================================== 00:18:58.647 Total : 3422.91 13.37 0.00 0.00 36866.18 6505.05 38253.61 00:18:58.647 0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:58.647 nvmf_trace.0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 952831 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 952831 ']' 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 952831 
00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952831 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952831' 00:18:58.647 killing process with pid 952831 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 952831 00:18:58.647 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.647 00:18:58.647 Latency(us) 00:18:58.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.647 =================================================================================================================== 00:18:58.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.647 11:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 952831 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.647 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.647 rmmod nvme_tcp 00:18:58.647 rmmod nvme_fabrics 00:18:58.647 rmmod nvme_keyring 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 952685 ']' 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 952685 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 952685 ']' 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 952685 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952685 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952685' 00:18:58.907 killing process with pid 952685 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 952685 00:18:58.907 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 952685 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.167 11:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.120 11:55:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.120 11:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WKA2S5YOdE /tmp/tmp.bTuwrKCJwD /tmp/tmp.zt4jvl7nly 00:19:01.120 00:19:01.120 real 1m22.283s 00:19:01.120 user 2m10.358s 00:19:01.120 sys 0m25.684s 00:19:01.120 11:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:01.120 11:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.120 ************************************ 00:19:01.120 END TEST nvmf_tls 00:19:01.120 ************************************ 00:19:01.120 11:55:50 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:01.121 11:55:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:01.121 11:55:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:01.121 11:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:01.121 ************************************ 00:19:01.121 START TEST nvmf_fips 00:19:01.121 ************************************ 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:01.121 * Looking for test storage... 
00:19:01.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 11:55:50 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:01.121 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:01.378 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:19:01.379 Error setting digest 00:19:01.379 00C2FBBDC57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:01.379 00C2FBBDC57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.379 11:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.908 
11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:03.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:03.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:03.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:03.908 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.908 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:03.909 00:19:03.909 --- 10.0.0.2 ping statistics --- 00:19:03.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.909 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:03.909 00:19:03.909 --- 10.0.0.1 ping statistics --- 00:19:03.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.909 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=955199 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 955199 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 955199 ']' 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:03.909 11:55:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.909 [2024-07-12 11:55:53.031948] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:19:03.909 [2024-07-12 11:55:53.032029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.909 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.909 [2024-07-12 11:55:53.104906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.909 [2024-07-12 11:55:53.220193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.909 [2024-07-12 11:55:53.220276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:03.909 [2024-07-12 11:55:53.220293] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.909 [2024-07-12 11:55:53.220306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.909 [2024-07-12 11:55:53.220318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.909 [2024-07-12 11:55:53.220356] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.844 11:55:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.844 [2024-07-12 11:55:54.267892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.844 [2024-07-12 11:55:54.283884] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.844 [2024-07-12 11:55:54.284090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.844 [2024-07-12 11:55:54.316511] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:04.844 malloc0 00:19:04.844 11:55:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=955352 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 955352 /var/tmp/bdevperf.sock 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 955352 ']' 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # 
local max_retries=100 00:19:05.102 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.103 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:05.103 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 [2024-07-12 11:55:54.411196] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:19:05.103 [2024-07-12 11:55:54.411268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955352 ] 00:19:05.103 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.103 [2024-07-12 11:55:54.470040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.103 [2024-07-12 11:55:54.586398] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.360 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:05.360 11:55:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:19:05.360 11:55:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:05.618 [2024-07-12 11:55:54.964213] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.618 [2024-07-12 11:55:54.964353] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:05.618 TLSTESTn1 00:19:05.618 11:55:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.876 Running I/O for 10 seconds... 
00:19:15.853 00:19:15.853 Latency(us) 00:19:15.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.853 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:15.853 Verification LBA range: start 0x0 length 0x2000 00:19:15.853 TLSTESTn1 : 10.02 3569.46 13.94 0.00 0.00 35797.28 8738.13 36117.62 00:19:15.853 =================================================================================================================== 00:19:15.853 Total : 3569.46 13.94 0.00 0.00 35797.28 8738.13 36117.62 00:19:15.853 0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:15.854 nvmf_trace.0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 955352 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 955352 ']' 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 955352 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 955352 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 955352' 00:19:15.854 killing process with pid 955352 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 955352 00:19:15.854 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.854 00:19:15.854 Latency(us) 00:19:15.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.854 =================================================================================================================== 00:19:15.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.854 [2024-07-12 11:56:05.301304] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.854 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 955352 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.111 rmmod nvme_tcp 00:19:16.111 rmmod nvme_fabrics 00:19:16.111 rmmod nvme_keyring 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 955199 ']' 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 955199 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 955199 ']' 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 955199 00:19:16.111 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 955199 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 955199' 00:19:16.370 killing process with pid 955199 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 955199 00:19:16.370 [2024-07-12 11:56:05.629986] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:16.370 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 955199 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.629 11:56:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.534 11:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.534 11:56:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:18.534 00:19:18.534 real 0m17.422s 00:19:18.534 user 0m22.650s 00:19:18.534 sys 0m5.369s 00:19:18.534 11:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:18.534 11:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.534 ************************************ 00:19:18.534 END TEST nvmf_fips 00:19:18.534 
************************************ 00:19:18.534 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:18.534 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:18.534 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:18.534 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:18.534 11:56:07 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.534 11:56:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.434 11:56:09 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:20.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:20.434 11:56:09 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:20.434 11:56:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:20.434 11:56:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:20.434 11:56:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:19:20.434 ************************************ 00:19:20.434 START TEST nvmf_perf_adq 00:19:20.434 ************************************ 00:19:20.434 11:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:20.693 * Looking for test storage... 00:19:20.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.693 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.694 11:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:20.694 11:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.694 11:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 
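Annotation: the trace around here is gather_supported_nvmf_pci_devs building ID tables for the supported NICs and matching them against the PCI bus; on this host it finds two Intel E810 functions (8086:159b, ice driver) and records the net devices behind them (cvl_0_0, cvl_0_1). A minimal standalone sketch of the same lookup using only sysfs follows — an illustration, not the common.sh implementation itself:

# Sketch: enumerate Intel E810 (8086:159b) PCI functions and the net devices bound to them.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null || echo "no net device bound"   # net/ exists once a driver registered an interface
  fi
done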
00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:22.628 11:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:23.194 11:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:25.097 11:56:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:30.378 11:56:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.378 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.379 11:56:19 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:30.379 00:19:30.379 --- 10.0.0.2 ping statistics --- 00:19:30.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.379 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:19:30.379 00:19:30.379 --- 10.0.0.1 ping statistics --- 00:19:30.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.379 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=961103 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 961103 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 961103 ']' 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:30.379 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.379 [2024-07-12 11:56:19.750428] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
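Annotation: by this point nvmf_tcp_init has wired the two E810 ports back-to-back through a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. Condensed from the trace (interface and namespace names exactly as logged above):

# Target-side port lives in its own namespace; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP into the initiator port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1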
00:19:30.379 [2024-07-12 11:56:19.750509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.379 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.379 [2024-07-12 11:56:19.821788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.638 [2024-07-12 11:56:19.945346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.638 [2024-07-12 11:56:19.945424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.638 [2024-07-12 11:56:19.945440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.638 [2024-07-12 11:56:19.945454] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.638 [2024-07-12 11:56:19.945465] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.638 [2024-07-12 11:56:19.945534] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.638 [2024-07-12 11:56:19.945584] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.638 [2024-07-12 11:56:19.945780] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.638 [2024-07-12 11:56:19.945784] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.638 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:30.638 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:19:30.638 11:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.638 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:30.638 11:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.638 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.897 [2024-07-12 11:56:20.185830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.897 Malloc1 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.897 [2024-07-12 11:56:20.239046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=961251 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:30.897 11:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:30.897 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.801 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:32.801 11:56:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.801 11:56:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 11:56:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.801 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:32.801 "tick_rate": 2700000000, 00:19:32.801 
"poll_groups": [ 00:19:32.801 { 00:19:32.801 "name": "nvmf_tgt_poll_group_000", 00:19:32.801 "admin_qpairs": 1, 00:19:32.801 "io_qpairs": 1, 00:19:32.801 "current_admin_qpairs": 1, 00:19:32.801 "current_io_qpairs": 1, 00:19:32.801 "pending_bdev_io": 0, 00:19:32.801 "completed_nvme_io": 20133, 00:19:32.801 "transports": [ 00:19:32.801 { 00:19:32.801 "trtype": "TCP" 00:19:32.801 } 00:19:32.801 ] 00:19:32.801 }, 00:19:32.801 { 00:19:32.801 "name": "nvmf_tgt_poll_group_001", 00:19:32.802 "admin_qpairs": 0, 00:19:32.802 "io_qpairs": 1, 00:19:32.802 "current_admin_qpairs": 0, 00:19:32.802 "current_io_qpairs": 1, 00:19:32.802 "pending_bdev_io": 0, 00:19:32.802 "completed_nvme_io": 19688, 00:19:32.802 "transports": [ 00:19:32.802 { 00:19:32.802 "trtype": "TCP" 00:19:32.802 } 00:19:32.802 ] 00:19:32.802 }, 00:19:32.802 { 00:19:32.802 "name": "nvmf_tgt_poll_group_002", 00:19:32.802 "admin_qpairs": 0, 00:19:32.802 "io_qpairs": 1, 00:19:32.802 "current_admin_qpairs": 0, 00:19:32.802 "current_io_qpairs": 1, 00:19:32.802 "pending_bdev_io": 0, 00:19:32.802 "completed_nvme_io": 20393, 00:19:32.802 "transports": [ 00:19:32.802 { 00:19:32.802 "trtype": "TCP" 00:19:32.802 } 00:19:32.802 ] 00:19:32.802 }, 00:19:32.802 { 00:19:32.802 "name": "nvmf_tgt_poll_group_003", 00:19:32.802 "admin_qpairs": 0, 00:19:32.802 "io_qpairs": 1, 00:19:32.802 "current_admin_qpairs": 0, 00:19:32.802 "current_io_qpairs": 1, 00:19:32.802 "pending_bdev_io": 0, 00:19:32.802 "completed_nvme_io": 19703, 00:19:32.802 "transports": [ 00:19:32.802 { 00:19:32.802 "trtype": "TCP" 00:19:32.802 } 00:19:32.802 ] 00:19:32.802 } 00:19:32.802 ] 00:19:32.802 }' 00:19:32.802 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:32.802 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:33.061 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:33.061 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:33.061 11:56:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 961251 00:19:41.175 Initializing NVMe Controllers 00:19:41.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:41.175 Initialization complete. Launching workers. 
00:19:41.175 ======================================================== 00:19:41.175 Latency(us) 00:19:41.175 Device Information : IOPS MiB/s Average min max 00:19:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10464.90 40.88 6117.07 2152.61 10234.97 00:19:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10282.10 40.16 6225.61 1485.24 10397.03 00:19:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10744.30 41.97 5956.18 2116.92 9550.83 00:19:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10538.70 41.17 6072.32 2199.75 10468.88 00:19:41.175 ======================================================== 00:19:41.175 Total : 42029.99 164.18 6091.27 1485.24 10468.88 00:19:41.175 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.175 rmmod nvme_tcp 00:19:41.175 rmmod nvme_fabrics 00:19:41.175 rmmod nvme_keyring 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 961103 ']' 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 961103 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 961103 ']' 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 961103 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 961103 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 961103' 00:19:41.175 killing process with pid 961103 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 961103 00:19:41.175 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 961103 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.483 11:56:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.384 11:56:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.384 11:56:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:43.384 11:56:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:44.329 11:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:46.232 11:56:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.534 11:56:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:51.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:51.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
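Annotation: between the two perf passes, adq_reload_driver (traced a few lines above) bounces the E810's ice driver before the harness rescans the bus here. The intent described in the comments below is inferred from the script flow rather than stated in the log; the commands themselves are verbatim from the trace:

# adq_reload_driver, as traced above: bring the E810 back to a clean queue/TC state.
rmmod ice        # unload the driver; previously configured channels and filters go away
modprobe ice     # reload it; the interfaces reappear with default settings
sleep 5          # give link and queue bring-up time to settle before reconfiguring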
00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:51.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:51.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.534 
11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:19:51.534 00:19:51.534 --- 10.0.0.2 ping statistics --- 00:19:51.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.534 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:51.534 00:19:51.534 --- 10.0.0.1 ping statistics --- 00:19:51.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.534 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:51.534 net.core.busy_poll = 1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:51.534 net.core.busy_read = 1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=963879 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 963879 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 963879 ']' 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:51.534 11:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.534 [2024-07-12 11:56:40.824625] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:19:51.534 [2024-07-12 11:56:40.824701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.534 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.534 [2024-07-12 11:56:40.888970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.534 [2024-07-12 11:56:40.995497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.534 [2024-07-12 11:56:40.995551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.534 [2024-07-12 11:56:40.995574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.534 [2024-07-12 11:56:40.995585] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.534 [2024-07-12 11:56:40.995594] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
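Annotation: the adq_configure_driver steps just traced set up Application Device Queues on the target-side port: hardware TC offload is enabled, busy polling is turned on, a two-class mqprio qdisc carves the queues, and a flower filter pins NVMe/TCP traffic (10.0.0.2:4420) to traffic class 1 in hardware; perf_adq.sh then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 (path from the trace; its contents are not shown in the log). Condensed below without the ip netns exec cvl_0_0_ns_spdk prefix that the per-interface commands carry above:

ethtool --offload cvl_0_0 hw-tc-offload on                        # allow tc filters to be offloaded to the NIC
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                    # busy-poll sockets for lower latency
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1  # steer NVMe/TCP into hardware TC 1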
00:19:51.534 [2024-07-12 11:56:40.995674] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.534 [2024-07-12 11:56:40.995741] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.534 [2024-07-12 11:56:40.995806] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.534 [2024-07-12 11:56:40.995808] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.791 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.792 [2024-07-12 11:56:41.216767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.792 Malloc1 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.792 11:56:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.792 [2024-07-12 11:56:41.270023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=963932 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:51.792 11:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:52.050 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:53.947 "tick_rate": 2700000000, 00:19:53.947 "poll_groups": [ 00:19:53.947 { 00:19:53.947 "name": "nvmf_tgt_poll_group_000", 00:19:53.947 "admin_qpairs": 1, 00:19:53.947 "io_qpairs": 2, 00:19:53.947 "current_admin_qpairs": 1, 00:19:53.947 "current_io_qpairs": 2, 00:19:53.947 "pending_bdev_io": 0, 00:19:53.947 "completed_nvme_io": 25944, 00:19:53.947 "transports": [ 00:19:53.947 { 00:19:53.947 "trtype": "TCP" 00:19:53.947 } 00:19:53.947 ] 00:19:53.947 }, 00:19:53.947 { 00:19:53.947 "name": "nvmf_tgt_poll_group_001", 00:19:53.947 "admin_qpairs": 0, 00:19:53.947 "io_qpairs": 2, 00:19:53.947 "current_admin_qpairs": 0, 00:19:53.947 "current_io_qpairs": 2, 00:19:53.947 "pending_bdev_io": 0, 00:19:53.947 "completed_nvme_io": 25861, 00:19:53.947 "transports": [ 00:19:53.947 { 00:19:53.947 "trtype": "TCP" 00:19:53.947 } 00:19:53.947 ] 00:19:53.947 }, 00:19:53.947 { 00:19:53.947 "name": "nvmf_tgt_poll_group_002", 00:19:53.947 "admin_qpairs": 0, 00:19:53.947 "io_qpairs": 0, 00:19:53.947 "current_admin_qpairs": 0, 00:19:53.947 "current_io_qpairs": 0, 00:19:53.947 "pending_bdev_io": 0, 00:19:53.947 "completed_nvme_io": 0, 
00:19:53.947 "transports": [ 00:19:53.947 { 00:19:53.947 "trtype": "TCP" 00:19:53.947 } 00:19:53.947 ] 00:19:53.947 }, 00:19:53.947 { 00:19:53.947 "name": "nvmf_tgt_poll_group_003", 00:19:53.947 "admin_qpairs": 0, 00:19:53.947 "io_qpairs": 0, 00:19:53.947 "current_admin_qpairs": 0, 00:19:53.947 "current_io_qpairs": 0, 00:19:53.947 "pending_bdev_io": 0, 00:19:53.947 "completed_nvme_io": 0, 00:19:53.947 "transports": [ 00:19:53.947 { 00:19:53.947 "trtype": "TCP" 00:19:53.947 } 00:19:53.947 ] 00:19:53.947 } 00:19:53.947 ] 00:19:53.947 }' 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:53.947 11:56:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 963932 00:20:02.046 Initializing NVMe Controllers 00:20:02.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:02.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:02.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:02.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:02.046 Initialization complete. Launching workers. 00:20:02.046 ======================================================== 00:20:02.046 Latency(us) 00:20:02.046 Device Information : IOPS MiB/s Average min max 00:20:02.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6616.74 25.85 9672.35 1654.66 55626.17 00:20:02.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7087.83 27.69 9030.18 1708.49 54418.93 00:20:02.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7007.63 27.37 9158.08 1869.12 55047.98 00:20:02.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6410.84 25.04 9984.00 1793.56 54792.60 00:20:02.046 ======================================================== 00:20:02.046 Total : 27123.03 105.95 9445.33 1654.66 55626.17 00:20:02.046 00:20:02.046 11:56:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:02.046 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.046 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:02.046 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.046 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.047 rmmod nvme_tcp 00:20:02.047 rmmod nvme_fabrics 00:20:02.047 rmmod nvme_keyring 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 963879 ']' 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 963879 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 963879 ']' 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 963879 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 963879 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 963879' 00:20:02.047 killing process with pid 963879 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 963879 00:20:02.047 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 963879 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.629 11:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.531 11:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.531 11:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:04.531 00:20:04.531 real 0m44.007s 00:20:04.531 user 2m39.862s 00:20:04.531 sys 0m9.304s 00:20:04.531 11:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:04.531 11:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.531 ************************************ 00:20:04.531 END TEST nvmf_perf_adq 00:20:04.531 ************************************ 00:20:04.531 11:56:53 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:04.531 11:56:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:04.531 11:56:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:04.531 11:56:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.531 ************************************ 00:20:04.531 START TEST nvmf_shutdown 00:20:04.531 ************************************ 00:20:04.531 11:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:04.531 * Looking for test storage... 
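The pass condition evaluated at perf_adq.sh@100-101 above is a queue-placement check: with the default posix sock implementation reconfigured via sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server before framework_start_init, the I/O qpairs opened by the 0xF0-masked spdk_nvme_perf run are expected to be steered onto a subset of the target poll groups, leaving at least two of the four idle. A minimal standalone sketch of that check, reusing the exact jq filter from the trace (rpc_cmd here stands for the test suite's wrapper around scripts/rpc.py; the failure message is illustrative only):

# Sketch of the idle-poll-group count from perf_adq.sh@99-101, rewritten as standalone shell.
nvmf_stats=$(rpc_cmd nvmf_get_stats)
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$nvmf_stats" | wc -l)
# In this run two poll groups carried 2 io_qpairs each and two stayed at 0, so count=2
# and the [[ $count -lt 2 ]] guard does not trip.
if [[ $count -lt 2 ]]; then
    echo "ADQ placement check failed: only $count idle poll groups" >&2
fi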
00:20:04.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.531 11:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.531 11:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:04.531 11:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:04.790 ************************************ 00:20:04.790 START TEST nvmf_shutdown_tc1 00:20:04.790 ************************************ 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:20:04.790 11:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.790 11:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:06.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:06.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.692 11:56:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:06.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:06.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.692 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:20:06.951 00:20:06.951 --- 10.0.0.2 ping statistics --- 00:20:06.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.951 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:20:06.951 00:20:06.951 --- 10.0.0.1 ping statistics --- 00:20:06.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.951 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=967198 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 967198 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 967198 ']' 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:06.951 11:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.951 [2024-07-12 11:56:56.354743] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
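The two reachability pings above close out nvmf_tcp_init, which isolates the first E810 port in its own network namespace so that the target side (10.0.0.2 on cvl_0_0) and the initiator side (10.0.0.1 on cvl_0_1) talk over a real link. Condensed from the nvmf/common.sh trace above, using this run's interface names and addresses, the bring-up amounts to:

# Test-network setup as traced in nvmf/common.sh@242-268 (cvl_0_0 = target port, cvl_0_1 = initiator port).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # host -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> host

The nvmf_tgt process is then launched inside that namespace (nvmf/common.sh@480: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is the application whose DPDK EAL startup continues in the trace below.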
00:20:06.951 [2024-07-12 11:56:56.354817] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.951 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.951 [2024-07-12 11:56:56.420967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.208 [2024-07-12 11:56:56.529332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.208 [2024-07-12 11:56:56.529377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.208 [2024-07-12 11:56:56.529404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.208 [2024-07-12 11:56:56.529415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.208 [2024-07-12 11:56:56.529424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.208 [2024-07-12 11:56:56.529501] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.208 [2024-07-12 11:56:56.529558] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.208 [2024-07-12 11:56:56.529623] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.208 [2024-07-12 11:56:56.529626] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 [2024-07-12 11:56:57.336962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.140 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 Malloc1 00:20:08.140 [2024-07-12 11:56:57.426302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.140 Malloc2 00:20:08.140 Malloc3 00:20:08.140 Malloc4 00:20:08.140 Malloc5 00:20:08.396 Malloc6 00:20:08.396 Malloc7 00:20:08.396 Malloc8 00:20:08.396 Malloc9 00:20:08.396 Malloc10 00:20:08.396 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.396 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:08.396 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:08.396 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=967381 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 967381 /var/tmp/bdevperf.sock 00:20:08.654 11:56:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 967381 ']' 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.654 { 00:20:08.654 "params": { 00:20:08.654 "name": "Nvme$subsystem", 00:20:08.654 "trtype": "$TEST_TRANSPORT", 00:20:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.654 "adrfam": "ipv4", 00:20:08.654 "trsvcid": "$NVMF_PORT", 00:20:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.654 "hdgst": ${hdgst:-false}, 00:20:08.654 "ddgst": ${ddgst:-false} 00:20:08.654 }, 00:20:08.654 "method": "bdev_nvme_attach_controller" 00:20:08.654 } 00:20:08.654 EOF 00:20:08.654 )") 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.654 { 00:20:08.654 "params": { 00:20:08.654 "name": "Nvme$subsystem", 00:20:08.654 "trtype": "$TEST_TRANSPORT", 00:20:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.654 "adrfam": "ipv4", 00:20:08.654 "trsvcid": "$NVMF_PORT", 00:20:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.654 "hdgst": ${hdgst:-false}, 00:20:08.654 "ddgst": ${ddgst:-false} 00:20:08.654 }, 00:20:08.654 "method": "bdev_nvme_attach_controller" 00:20:08.654 } 00:20:08.654 EOF 00:20:08.654 )") 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.654 { 00:20:08.654 "params": { 00:20:08.654 "name": "Nvme$subsystem", 00:20:08.654 "trtype": 
"$TEST_TRANSPORT", 00:20:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.654 "adrfam": "ipv4", 00:20:08.654 "trsvcid": "$NVMF_PORT", 00:20:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.654 "hdgst": ${hdgst:-false}, 00:20:08.654 "ddgst": ${ddgst:-false} 00:20:08.654 }, 00:20:08.654 "method": "bdev_nvme_attach_controller" 00:20:08.654 } 00:20:08.654 EOF 00:20:08.654 )") 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.654 { 00:20:08.654 "params": { 00:20:08.654 "name": "Nvme$subsystem", 00:20:08.654 "trtype": "$TEST_TRANSPORT", 00:20:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.654 "adrfam": "ipv4", 00:20:08.654 "trsvcid": "$NVMF_PORT", 00:20:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.654 "hdgst": ${hdgst:-false}, 00:20:08.654 "ddgst": ${ddgst:-false} 00:20:08.654 }, 00:20:08.654 "method": "bdev_nvme_attach_controller" 00:20:08.654 } 00:20:08.654 EOF 00:20:08.654 )") 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.654 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.654 { 00:20:08.654 "params": { 00:20:08.654 "name": "Nvme$subsystem", 00:20:08.654 "trtype": "$TEST_TRANSPORT", 00:20:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.654 "adrfam": "ipv4", 00:20:08.654 "trsvcid": "$NVMF_PORT", 00:20:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.655 { 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme$subsystem", 00:20:08.655 "trtype": "$TEST_TRANSPORT", 00:20:08.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "$NVMF_PORT", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.655 { 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme$subsystem", 00:20:08.655 "trtype": "$TEST_TRANSPORT", 
00:20:08.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "$NVMF_PORT", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.655 { 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme$subsystem", 00:20:08.655 "trtype": "$TEST_TRANSPORT", 00:20:08.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "$NVMF_PORT", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.655 { 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme$subsystem", 00:20:08.655 "trtype": "$TEST_TRANSPORT", 00:20:08.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "$NVMF_PORT", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.655 { 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme$subsystem", 00:20:08.655 "trtype": "$TEST_TRANSPORT", 00:20:08.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "$NVMF_PORT", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.655 "hdgst": ${hdgst:-false}, 00:20:08.655 "ddgst": ${ddgst:-false} 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 } 00:20:08.655 EOF 00:20:08.655 )") 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:08.655 11:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme1", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme2", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme3", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme4", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme5", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme6", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme7", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme8", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:08.655 "hdgst": false, 
00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme9", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 },{ 00:20:08.655 "params": { 00:20:08.655 "name": "Nvme10", 00:20:08.655 "trtype": "tcp", 00:20:08.655 "traddr": "10.0.0.2", 00:20:08.655 "adrfam": "ipv4", 00:20:08.655 "trsvcid": "4420", 00:20:08.655 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.655 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.655 "hdgst": false, 00:20:08.655 "ddgst": false 00:20:08.655 }, 00:20:08.655 "method": "bdev_nvme_attach_controller" 00:20:08.655 }' 00:20:08.655 [2024-07-12 11:56:57.941380] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:08.656 [2024-07-12 11:56:57.941452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:08.656 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.656 [2024-07-12 11:56:58.005616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.656 [2024-07-12 11:56:58.115924] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.548 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:10.548 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:20:10.548 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 967381 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:10.549 11:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:11.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 967381 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 967198 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- 
# local subsystem config 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.505 { 00:20:11.505 "params": { 00:20:11.505 "name": "Nvme$subsystem", 00:20:11.505 "trtype": "$TEST_TRANSPORT", 00:20:11.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.505 "adrfam": "ipv4", 00:20:11.505 "trsvcid": "$NVMF_PORT", 00:20:11.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.505 "hdgst": ${hdgst:-false}, 00:20:11.505 "ddgst": ${ddgst:-false} 00:20:11.505 }, 00:20:11.505 "method": "bdev_nvme_attach_controller" 00:20:11.505 } 00:20:11.505 EOF 00:20:11.505 )") 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.505 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.506 { 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme$subsystem", 00:20:11.506 "trtype": "$TEST_TRANSPORT", 00:20:11.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "$NVMF_PORT", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.506 "hdgst": ${hdgst:-false}, 00:20:11.506 "ddgst": ${ddgst:-false} 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 } 00:20:11.506 EOF 00:20:11.506 )") 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.506 11:57:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.506 { 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme$subsystem", 00:20:11.506 "trtype": "$TEST_TRANSPORT", 00:20:11.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "$NVMF_PORT", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.506 "hdgst": ${hdgst:-false}, 00:20:11.506 "ddgst": ${ddgst:-false} 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 } 00:20:11.506 EOF 00:20:11.506 )") 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.506 { 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme$subsystem", 00:20:11.506 "trtype": "$TEST_TRANSPORT", 00:20:11.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "$NVMF_PORT", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.506 "hdgst": ${hdgst:-false}, 00:20:11.506 "ddgst": ${ddgst:-false} 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 } 00:20:11.506 EOF 00:20:11.506 )") 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
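The repeated common.sh@534/@554 blocks above are gen_nvmf_target_json expanding one heredoc per requested subsystem (1..10) into a config array of bdev_nvme_attach_controller entries. A minimal sketch of that templating step, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the surrounding suite (tcp, 10.0.0.2 and 4420 on this rig):

config=()
for subsystem in "${@:-1}"; do
  # Each pass renders one JSON object that later becomes a bdev_nvme_attach_controller call.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

The rendered entries are then joined and wrapped by the jq/IFS/printf steps visible at the common.sh@556-@558 lines that follow.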
00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:11.506 11:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme1", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme2", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme3", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme4", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme5", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme6", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme7", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme8", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:11.506 "hdgst": false, 
00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme9", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 },{ 00:20:11.506 "params": { 00:20:11.506 "name": "Nvme10", 00:20:11.506 "trtype": "tcp", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "adrfam": "ipv4", 00:20:11.506 "trsvcid": "4420", 00:20:11.506 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.506 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.506 "hdgst": false, 00:20:11.506 "ddgst": false 00:20:11.506 }, 00:20:11.506 "method": "bdev_nvme_attach_controller" 00:20:11.506 }' 00:20:11.506 [2024-07-12 11:57:00.932943] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:11.506 [2024-07-12 11:57:00.933025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid967855 ] 00:20:11.506 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.506 [2024-07-12 11:57:00.998500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.763 [2024-07-12 11:57:01.110025] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.131 Running I/O for 1 seconds... 00:20:14.503 00:20:14.503 Latency(us) 00:20:14.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme1n1 : 1.14 223.67 13.98 0.00 0.00 283308.94 24466.77 257872.02 00:20:14.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme2n1 : 1.14 224.91 14.06 0.00 0.00 277060.84 21068.61 259425.47 00:20:14.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme3n1 : 1.11 267.74 16.73 0.00 0.00 221718.16 12718.84 251658.24 00:20:14.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme4n1 : 1.10 236.76 14.80 0.00 0.00 252050.48 5388.52 265639.25 00:20:14.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme5n1 : 1.18 219.68 13.73 0.00 0.00 269745.15 3762.25 264085.81 00:20:14.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme6n1 : 1.19 214.87 13.43 0.00 0.00 271730.35 20097.71 285834.05 00:20:14.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme7n1 : 1.15 231.66 14.48 0.00 0.00 242852.58 7378.87 225249.66 00:20:14.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 
00:20:14.503 Nvme8n1 : 1.20 267.58 16.72 0.00 0.00 210922.91 15146.10 254765.13 00:20:14.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme9n1 : 1.16 225.85 14.12 0.00 0.00 243641.18 1529.17 246997.90 00:20:14.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.503 Verification LBA range: start 0x0 length 0x400 00:20:14.503 Nvme10n1 : 1.20 266.15 16.63 0.00 0.00 205005.03 6505.05 267192.70 00:20:14.503 =================================================================================================================== 00:20:14.503 Total : 2378.87 148.68 0.00 0.00 245583.11 1529.17 285834.05 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.775 rmmod nvme_tcp 00:20:14.775 rmmod nvme_fabrics 00:20:14.775 rmmod nvme_keyring 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 967198 ']' 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 967198 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 967198 ']' 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 967198 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 967198 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:14.775 11:57:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 967198' 00:20:14.775 killing process with pid 967198 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 967198 00:20:14.775 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 967198 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.354 11:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:17.883 00:20:17.883 real 0m12.765s 00:20:17.883 user 0m37.762s 00:20:17.883 sys 0m3.313s 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:17.883 ************************************ 00:20:17.883 END TEST nvmf_shutdown_tc1 00:20:17.883 ************************************ 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:17.883 ************************************ 00:20:17.883 START TEST nvmf_shutdown_tc2 00:20:17.883 ************************************ 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.883 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:17.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:17.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:17.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:17.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.884 11:57:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.884 11:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:20:17.884 00:20:17.884 --- 10.0.0.2 ping statistics --- 00:20:17.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.884 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:20:17.884 00:20:17.884 --- 10.0.0.1 ping statistics --- 00:20:17.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.884 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=968684 
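The nvmf_tcp_init sequence above amounts to moving one of the two ice ports into a private network namespace so that target and initiator exchange real TCP traffic on a single host. Condensed from the commands in the trace (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this rig):

ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # host reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the target reaches back

The two one-packet pings whose statistics appear above are the sanity check that both directions work before nvmf_tgt is started inside the namespace.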
00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 968684 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 968684 ']' 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:17.884 11:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 [2024-07-12 11:57:07.114152] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:17.884 [2024-07-12 11:57:07.114248] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.884 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.884 [2024-07-12 11:57:07.183321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.884 [2024-07-12 11:57:07.302687] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.884 [2024-07-12 11:57:07.302752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.884 [2024-07-12 11:57:07.302768] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.884 [2024-07-12 11:57:07.302782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.884 [2024-07-12 11:57:07.302794] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
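waitforlisten above simply blocks until the freshly started nvmf_tgt (pid 968684) answers on /var/tmp/spdk.sock. The real helper in autotest_common.sh carries extra bookkeeping; a rough, hedged equivalent of the idea looks like this:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
        # rpc_get_methods is a cheap RPC; success means the socket is live.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1                                      # timed out waiting for the socket
}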
00:20:17.884 [2024-07-12 11:57:07.302885] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.884 [2024-07-12 11:57:07.302959] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.884 [2024-07-12 11:57:07.303008] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.884 [2024-07-12 11:57:07.303011] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.815 [2024-07-12 11:57:08.073813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.815 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.816 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.816 Malloc1 00:20:18.816 [2024-07-12 11:57:08.154742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.816 Malloc2 00:20:18.816 Malloc3 00:20:18.816 Malloc4 00:20:19.094 Malloc5 00:20:19.094 Malloc6 00:20:19.094 Malloc7 00:20:19.094 Malloc8 00:20:19.094 Malloc9 00:20:19.094 Malloc10 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=969350 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 969350 /var/tmp/bdevperf.sock 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 969350 ']' 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
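The create_subsystems stage just above (the shutdown.sh@28 cat lines, a single rpc_cmd, and the Malloc1..Malloc10 output) batches all target-side setup into rpcs.txt and replays it in one rpc_cmd invocation. Per subsystem, the batched file typically contains something along these lines; the backing-bdev size, block size and serial numbers here are illustrative, not read from this log:

# One iteration's worth of the batched RPC file (i = 1..10)
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420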
00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.352 { 00:20:19.352 "params": { 00:20:19.352 "name": "Nvme$subsystem", 00:20:19.352 "trtype": "$TEST_TRANSPORT", 00:20:19.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.352 "adrfam": "ipv4", 00:20:19.352 "trsvcid": "$NVMF_PORT", 00:20:19.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.352 "hdgst": ${hdgst:-false}, 00:20:19.352 "ddgst": ${ddgst:-false} 00:20:19.352 }, 00:20:19.352 "method": "bdev_nvme_attach_controller" 00:20:19.352 } 00:20:19.352 EOF 00:20:19.352 )") 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.352 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.352 { 00:20:19.352 "params": { 00:20:19.352 "name": "Nvme$subsystem", 00:20:19.352 "trtype": "$TEST_TRANSPORT", 00:20:19.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.352 "adrfam": "ipv4", 00:20:19.352 "trsvcid": "$NVMF_PORT", 00:20:19.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.352 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 
00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.353 { 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme$subsystem", 00:20:19.353 "trtype": "$TEST_TRANSPORT", 00:20:19.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "$NVMF_PORT", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.353 "hdgst": ${hdgst:-false}, 00:20:19.353 "ddgst": ${ddgst:-false} 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 } 00:20:19.353 EOF 00:20:19.353 )") 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:19.353 11:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme1", 00:20:19.353 "trtype": "tcp", 00:20:19.353 "traddr": "10.0.0.2", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "4420", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.353 "hdgst": false, 00:20:19.353 "ddgst": false 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 },{ 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme2", 00:20:19.353 "trtype": "tcp", 00:20:19.353 "traddr": "10.0.0.2", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "4420", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.353 "hdgst": false, 00:20:19.353 "ddgst": false 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 },{ 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme3", 00:20:19.353 "trtype": "tcp", 00:20:19.353 "traddr": "10.0.0.2", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "4420", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:19.353 "hdgst": false, 00:20:19.353 "ddgst": false 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 },{ 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme4", 00:20:19.353 "trtype": "tcp", 00:20:19.353 "traddr": "10.0.0.2", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "4420", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:19.353 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:19.353 "hdgst": false, 00:20:19.353 "ddgst": false 00:20:19.353 }, 00:20:19.353 "method": "bdev_nvme_attach_controller" 00:20:19.353 },{ 00:20:19.353 "params": { 00:20:19.353 "name": "Nvme5", 00:20:19.353 "trtype": "tcp", 00:20:19.353 "traddr": "10.0.0.2", 00:20:19.353 "adrfam": "ipv4", 00:20:19.353 "trsvcid": "4420", 00:20:19.353 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:19.354 "hdgst": false, 00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 },{ 00:20:19.354 "params": { 00:20:19.354 "name": "Nvme6", 00:20:19.354 "trtype": "tcp", 00:20:19.354 "traddr": "10.0.0.2", 00:20:19.354 "adrfam": "ipv4", 00:20:19.354 "trsvcid": "4420", 00:20:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:19.354 "hdgst": false, 00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 },{ 00:20:19.354 "params": { 00:20:19.354 "name": "Nvme7", 00:20:19.354 "trtype": "tcp", 00:20:19.354 "traddr": "10.0.0.2", 00:20:19.354 "adrfam": "ipv4", 00:20:19.354 "trsvcid": "4420", 00:20:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:19.354 "hdgst": false, 00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 },{ 00:20:19.354 "params": { 00:20:19.354 "name": "Nvme8", 00:20:19.354 "trtype": "tcp", 00:20:19.354 "traddr": "10.0.0.2", 00:20:19.354 "adrfam": "ipv4", 00:20:19.354 "trsvcid": "4420", 00:20:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:19.354 "hdgst": false, 
00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 },{ 00:20:19.354 "params": { 00:20:19.354 "name": "Nvme9", 00:20:19.354 "trtype": "tcp", 00:20:19.354 "traddr": "10.0.0.2", 00:20:19.354 "adrfam": "ipv4", 00:20:19.354 "trsvcid": "4420", 00:20:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:19.354 "hdgst": false, 00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 },{ 00:20:19.354 "params": { 00:20:19.354 "name": "Nvme10", 00:20:19.354 "trtype": "tcp", 00:20:19.354 "traddr": "10.0.0.2", 00:20:19.354 "adrfam": "ipv4", 00:20:19.354 "trsvcid": "4420", 00:20:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:19.354 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:19.354 "hdgst": false, 00:20:19.354 "ddgst": false 00:20:19.354 }, 00:20:19.354 "method": "bdev_nvme_attach_controller" 00:20:19.354 }' 00:20:19.354 [2024-07-12 11:57:08.664525] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:19.354 [2024-07-12 11:57:08.664598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969350 ] 00:20:19.354 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.354 [2024-07-12 11:57:08.727594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.354 [2024-07-12 11:57:08.840092] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.249 Running I/O for 10 seconds... 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:21.249 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:21.507 11:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 969350 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 969350 ']' 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 969350 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 
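The xtrace above (target/shutdown.sh lines 50-69) is the suite's waitforio helper: it polls bdevperf over its RPC socket until Nvme1n1 has served at least 100 reads, retrying up to ten times at 0.25 s intervals (read_io_count climbs 3, 67, 131 before the break). A minimal sketch of that loop reconstructed from the trace; shutdown.sh itself is not reproduced in this log and rpc_cmd is the suite's wrapper around scripts/rpc.py, so treat the body below as an approximation run from the SPDK repo root:

    # Poll the Nvme1n1 bdev in bdevperf until it has completed >= 100 reads,
    # giving up after 10 attempts spaced 0.25 s apart.
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i count
        for (( i = 10; i != 0; i-- )); do
            count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    # e.g.: waitforio /var/tmp/bdevperf.sock Nvme1n1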
00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:21.765 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 969350 00:20:22.023 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:22.023 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:22.023 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 969350' 00:20:22.023 killing process with pid 969350 00:20:22.023 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 969350 00:20:22.023 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 969350 00:20:22.023 Received shutdown signal, test time was about 0.978111 seconds 00:20:22.023 00:20:22.023 Latency(us) 00:20:22.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme1n1 : 0.93 206.29 12.89 0.00 0.00 306641.22 22136.60 251658.24 00:20:22.023 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme2n1 : 0.95 202.36 12.65 0.00 0.00 305019.26 22233.69 254765.13 00:20:22.023 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme3n1 : 0.96 269.55 16.85 0.00 0.00 225380.64 2366.58 257872.02 00:20:22.023 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme4n1 : 0.95 273.49 17.09 0.00 0.00 216912.63 10728.49 246997.90 00:20:22.023 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme5n1 : 0.97 264.00 16.50 0.00 0.00 221274.83 18252.99 256318.58 00:20:22.023 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme6n1 : 0.98 261.96 16.37 0.00 0.00 218627.79 19709.35 254765.13 00:20:22.023 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme7n1 : 0.97 262.90 16.43 0.00 0.00 213217.28 20194.80 250104.79 00:20:22.023 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme8n1 : 0.95 206.84 12.93 0.00 0.00 262251.00 6310.87 253211.69 00:20:22.023 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme9n1 : 0.95 209.34 13.08 0.00 0.00 253229.58 9320.68 257872.02 00:20:22.023 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.023 Verification LBA range: start 0x0 length 0x400 00:20:22.023 Nvme10n1 : 0.96 200.26 12.52 0.00 0.00 261880.54 21262.79 285834.05 00:20:22.023 =================================================================================================================== 00:20:22.023 Total : 2356.98 147.31 0.00 
0.00 244210.88 2366.58 285834.05 00:20:22.281 11:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 968684 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.212 rmmod nvme_tcp 00:20:23.212 rmmod nvme_fabrics 00:20:23.212 rmmod nvme_keyring 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 968684 ']' 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 968684 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 968684 ']' 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 968684 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:23.212 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 968684 00:20:23.470 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:23.470 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:23.470 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 968684' 00:20:23.470 killing process with pid 968684 00:20:23.470 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 968684 00:20:23.470 11:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 968684 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
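With bdevperf gone, nvmf_shutdown_tc2 finishes through stoptarget/nvmftestfini: the per-test state and config files are removed, the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines above), the nvmf_tgt process (pid 968684, reactor_1) is killed, and the namespace plumbing is torn down by the remove_spdk_ns / ip addr flush calls that follow. Condensed into a rough sketch of those steps as they appear in the trace (the paths and the nvmfpid variable follow the log; this is not the literal common.sh code):

    # Tear down after a shutdown test case.
    rm -f ./local-job0-0-verify.state
    rm -rf test/nvmf/target/bdevperf.conf test/nvmf/target/rpcs.txt
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"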
00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.038 11:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.936 00:20:25.936 real 0m8.455s 00:20:25.936 user 0m26.226s 00:20:25.936 sys 0m1.525s 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 ************************************ 00:20:25.936 END TEST nvmf_shutdown_tc2 00:20:25.936 ************************************ 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 ************************************ 00:20:25.936 START TEST nvmf_shutdown_tc3 00:20:25.936 ************************************ 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # 
xtrace_disable 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.936 11:57:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:25.936 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:25.936 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.936 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:25.937 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.937 11:57:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:25.937 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.937 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.195 11:57:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:26.195 00:20:26.195 --- 10.0.0.2 ping statistics --- 00:20:26.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.195 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:26.195 00:20:26.195 --- 10.0.0.1 ping statistics --- 00:20:26.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.195 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=970411 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 970411 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 970411 ']' 00:20:26.195 11:57:15 
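Before tc3 starts its own target, nvmftestinit rebuilds the TCP test topology: the two ice ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to act as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened, and reachability is ping-tested in both directions. The commands, lifted from the trace above:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator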
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:26.195 11:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 [2024-07-12 11:57:15.599509] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:26.195 [2024-07-12 11:57:15.599593] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.195 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.195 [2024-07-12 11:57:15.662814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.452 [2024-07-12 11:57:15.771126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.452 [2024-07-12 11:57:15.771190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.452 [2024-07-12 11:57:15.771204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.452 [2024-07-12 11:57:15.771214] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.452 [2024-07-12 11:57:15.771224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.452 [2024-07-12 11:57:15.771346] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.452 [2024-07-12 11:57:15.771410] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.452 [2024-07-12 11:57:15.771438] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.452 [2024-07-12 11:57:15.771440] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.382 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:27.382 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:20:27.382 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.382 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:27.382 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.383 [2024-07-12 11:57:16.603798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 
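A note on the core masks: nvmf_tgt is launched with -m 0x1E, and 0x1E = 0b11110, so bits 1 through 4 are set and the reactors come up on cores 1, 2, 3 and 4 — matching the "Total cores available: 4" message and the four "Reactor started on core N" notices above. The bdevperf side of each test case runs with -c 0x1 (0b1), a single reactor on core 0, which is why every job in the tc2 results table reports Core Mask 0x1.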
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.383 11:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.383 Malloc1 00:20:27.383 [2024-07-12 11:57:16.693584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.383 Malloc2 00:20:27.383 Malloc3 00:20:27.383 Malloc4 00:20:27.383 Malloc5 00:20:27.640 Malloc6 00:20:27.640 Malloc7 00:20:27.640 Malloc8 00:20:27.640 Malloc9 00:20:27.640 Malloc10 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=970597 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 970597 /var/tmp/bdevperf.sock 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 970597 ']' 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.898 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
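create_subsystems loops over num_subsystems (1..10), appending one batch of RPCs per subsystem to rpcs.txt via the repeated `cat` calls above and then replaying the whole file through rpc_cmd; the Malloc1-Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice are the visible result. The generated rpcs.txt is never echoed into this log, but given the bdev names and NQNs bdevperf attaches to later, each batch is presumably along these lines (a sketch only — the malloc size and serial-number format are illustrative, not taken from shutdown.sh):

    # Hypothetical per-subsystem RPC batch, i = 1..10, fed to scripts/rpc.py.
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420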
00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 
00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.899 } 00:20:27.899 EOF 00:20:27.899 )") 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.899 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.899 { 00:20:27.899 "params": { 00:20:27.899 "name": "Nvme$subsystem", 00:20:27.899 "trtype": "$TEST_TRANSPORT", 00:20:27.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.899 "adrfam": "ipv4", 00:20:27.899 "trsvcid": "$NVMF_PORT", 00:20:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.899 "hdgst": ${hdgst:-false}, 00:20:27.899 "ddgst": ${ddgst:-false} 00:20:27.899 }, 00:20:27.899 "method": "bdev_nvme_attach_controller" 00:20:27.900 } 00:20:27.900 EOF 00:20:27.900 )") 00:20:27.900 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:27.900 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:27.900 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:27.900 11:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme1", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme2", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme3", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme4", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme5", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme6", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme7", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme8", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:27.900 "hdgst": false, 
00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme9", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 },{ 00:20:27.900 "params": { 00:20:27.900 "name": "Nvme10", 00:20:27.900 "trtype": "tcp", 00:20:27.900 "traddr": "10.0.0.2", 00:20:27.900 "adrfam": "ipv4", 00:20:27.900 "trsvcid": "4420", 00:20:27.900 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:27.900 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:27.900 "hdgst": false, 00:20:27.900 "ddgst": false 00:20:27.900 }, 00:20:27.900 "method": "bdev_nvme_attach_controller" 00:20:27.900 }' 00:20:27.900 [2024-07-12 11:57:17.214444] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:27.900 [2024-07-12 11:57:17.214516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970597 ] 00:20:27.900 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.900 [2024-07-12 11:57:17.278003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.900 [2024-07-12 11:57:17.387997] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.796 Running I/O for 10 seconds... 00:20:29.796 11:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:29.796 11:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:20:29.796 11:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:29.796 11:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.796 11:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:29.796 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:30.054 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 970411 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 970411 ']' 00:20:30.320 11:57:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 970411 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 970411 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 970411' 00:20:30.320 killing process with pid 970411 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 970411 00:20:30.320 11:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 970411 00:20:30.320 [2024-07-12 11:57:19.772019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772129] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772159] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772206] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772222] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772272] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772312] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772360] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772373] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772395] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with 
the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772418] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772430] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772453] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772465] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772520] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772533] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772551] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772592] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772627] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772640] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772663] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772675] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772686] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772699] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772710] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772769] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772781] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772805] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772817] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772842] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772862] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.320 [2024-07-12 11:57:19.772929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.772943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.772956] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.772968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.772980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773009] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773045] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773119] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773131] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773143] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 
11:57:19.773155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773172] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773183] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.773206] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56150 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775654] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775695] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775732] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775775] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775788] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775814] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775827] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775906] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775920] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775932] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775969] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.775982] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same 
with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776000] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776046] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776059] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776176] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776189] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776226] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776242] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776254] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776266] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776290] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776307] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776320] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776333] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776346] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776371] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776408] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776425] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776438] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776451] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776463] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776476] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.776516] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b58b30 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.777998] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778026] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778039] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778052] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778064] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778077] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778089] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the 
state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778101] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778113] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778125] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778149] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778182] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778194] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778206] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778219] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778230] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.321 [2024-07-12 11:57:19.778247] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778259] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778276] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778290] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778303] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778315] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778353] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778366] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778387] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778399] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778411] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778423] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778460] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778472] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778485] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778498] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778523] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778549] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778599] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778611] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778623] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778639] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 
11:57:19.778676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778701] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778738] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778773] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778814] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.778826] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b565f0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.779750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.779792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.779810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.779825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.779839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.779889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.779902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.779915] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300780 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.779970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.779996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.780012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.780025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.780039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.780053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.780067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.322 [2024-07-12 11:57:19.780081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.322 [2024-07-12 11:57:19.780094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b46d0 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.780267] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.322 [2024-07-12 11:57:19.782142] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.322 [2024-07-12 11:57:19.782658] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782746] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.322 [2024-07-12 11:57:19.782758] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782783] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782795] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782807] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782898] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782964] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.782993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783017] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.322 [2024-07-12 11:57:19.783090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56a90 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784233] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784281] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784297] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784353] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the 
state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784368] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784380] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784406] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784421] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784445] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784457] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784481] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784493] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784529] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784552] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784586] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784601] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784614] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784627] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784646] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784724] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784818] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784842] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784854] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784920] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784937] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784977] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.784990] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785026] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785040] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785052] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 
11:57:19.785136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785150] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785174] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785188] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785225] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785238] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785253] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785266] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785291] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785303] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323
[2024-07-12 11:57:19.785317] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785330] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56f50 is same with the state(5) to be set 00:20:30.323 [2024-07-12 11:57:19.785358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.323 [2024-07-12 11:57:19.785526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.323 [2024-07-12 11:57:19.785540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785586] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.785983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.785997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:30.324 [2024-07-12 11:57:19.786577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.324 [2024-07-12 11:57:19.786593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:30.325 [2024-07-12 11:57:19.786893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.786980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 11:57:19.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.325 [2024-07-12 
11:57:19.787217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.325 [2024-07-12 11:57:19.787241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2441b00 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787333] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2441b00 was disconnected and freed. reset controller. 00:20:30.325 [2024-07-12 11:57:19.787592] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787635] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787647] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787665] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787678] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787695] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787708] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787782] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787823] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787837] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787849] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787933] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787969] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.787992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788004] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788073] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788098] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788136] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788148] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788199] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788211] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788223] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788235] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788253] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.325 [2024-07-12 11:57:19.788264] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788276] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788299] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788311] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788322] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788334] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788346] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788369] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788381] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788393] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788419] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788431] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the 
state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.788443] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b573f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.789231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:30.326 [2024-07-12 11:57:19.789298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3f0 (9): Bad file descriptor 00:20:30.326 [2024-07-12 11:57:19.790479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300780 (9): Bad file descriptor 00:20:30.326 [2024-07-12 11:57:19.790520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b46d0 (9): Bad file descriptor 00:20:30.326 [2024-07-12 11:57:19.790582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f4b0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.790748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca470 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.790951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.790972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.790987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.791001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.791015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.791028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.791042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.326 [2024-07-12 11:57:19.791056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.791069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe610 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.791903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.791929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.791951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.791967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.791983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.791998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.792014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.792028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.792044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 
11:57:19.792066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.792091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.792108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.792124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.326 [2024-07-12 11:57:19.792138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.326 [2024-07-12 11:57:19.792153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24430b0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.792249] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24430b0 was disconnected and freed. reset controller. 00:20:30.326 [2024-07-12 11:57:19.792714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.326 [2024-07-12 11:57:19.792744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3f0 with addr=10.0.0.2, port=4420 00:20:30.326 [2024-07-12 11:57:19.792761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3f0 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.793992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.793998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:30.326 [2024-07-12 11:57:19.794027] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f4b0 (9): Bad file descriptor 00:20:30.326 [2024-07-12 11:57:19.794042] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794055] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3f0 (9): Bad file descriptor 00:20:30.326 [2024-07-12 11:57:19.794067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794116] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with
the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794127] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794150] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794170] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794182] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794206] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794218] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794230] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794248] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794271] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794294] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.326 [2024-07-12 11:57:19.794326] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794355] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794369] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794389] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794414] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794426] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794439] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:30.327 [2024-07-12 11:57:19.794451] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:30.327 [2024-07-12 11:57:19.794463] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:30.327 [2024-07-12 11:57:19.794492] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794505] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794530] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794542] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794554] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794567] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794579] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794591] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794604] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794616] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794628] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794641] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794653] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794668] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794693] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794712]
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794736] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794773] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794785] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794812] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.327 [2024-07-12 11:57:19.794822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794834] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.794886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57890 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.795077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:30.327 [2024-07-12 11:57:19.795180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.327 [2024-07-12 11:57:19.795208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f4b0 with addr=10.0.0.2, port=4420 00:20:30.327 [2024-07-12 11:57:19.795224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f4b0 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.795517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f4b0 (9): Bad file descriptor 00:20:30.327 [2024-07-12 11:57:19.795781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:30.327 [2024-07-12 11:57:19.795802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:30.327 [2024-07-12 11:57:19.795817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:30.327 [2024-07-12 11:57:19.796086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.327 [2024-07-12 11:57:19.796394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796422] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796502] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796515] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796543] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796569] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796619] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796631] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796634] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.327 [2024-07-12 11:57:19.796645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796659] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796672] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796684] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796712] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796727] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796781] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796818] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796847] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796944] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796969] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 
11:57:19.796982] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.327 [2024-07-12 11:57:19.796994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797045] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797071] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797100] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797114] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797127] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797152] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797176] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797189] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797215] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797239] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797252] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797281] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797299] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797313] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same 
with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797338] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797354] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797392] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.797404] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57d30 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798738] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798774] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798809] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798821] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798833] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798873] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798887] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798900] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798912] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798924] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798937] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798979] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.798992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799017] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799041] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799054] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799116] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799129] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799141] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799165] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799177] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799190] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the 
state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799214] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799227] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799239] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799251] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799263] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799276] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799360] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799372] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799384] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.328 [2024-07-12 11:57:19.799412] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b581f0 is same with the state(5) to be set 00:20:30.329 [2024-07-12 11:57:19.799510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 
11:57:19.799602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.799970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.799986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.329 [2024-07-12 11:57:19.800878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.329 [2024-07-12 11:57:19.800894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.800909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.800924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.800939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.800954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.800968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.800984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.800998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.330 [2024-07-12 11:57:19.801606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.801621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f9cf0 is same with the state(5) to be set 00:20:30.330 [2024-07-12 11:57:19.801695] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f9cf0 was disconnected and freed. reset controller. 00:20:30.330 [2024-07-12 11:57:19.802213] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22fc4e0 was disconnected and freed. reset controller. 00:20:30.330 [2024-07-12 11:57:19.802283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca470 (9): Bad file descriptor 00:20:30.330 [2024-07-12 11:57:19.802341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4af0 is same with the state(5) to be set 00:20:30.330 [2024-07-12 11:57:19.802515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:30.330 [2024-07-12 11:57:19.802598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c1310 is same with the state(5) to be set 00:20:30.330 [2024-07-12 11:57:19.802677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4910 is same with the state(5) to be set 00:20:30.330 [2024-07-12 11:57:19.802820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe610 (9): Bad file descriptor 00:20:30.330 [2024-07-12 11:57:19.802876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.330 [2024-07-12 11:57:19.802985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.330 [2024-07-12 11:57:19.802998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55ad0 is same with the state(5) to be set 00:20:30.330 [2024-07-12 11:57:19.804327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:30.330 [2024-07-12 11:57:19.804360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4910 (9): Bad file descriptor 00:20:30.330 [2024-07-12 11:57:19.804430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 
11:57:19.804687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.804976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.804992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.331 [2024-07-12 11:57:19.805395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.331 [2024-07-12 11:57:19.805411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.596 [2024-07-12 11:57:19.805425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.596 [2024-07-12 11:57:19.805451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.596 [2024-07-12 11:57:19.805465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.596 [2024-07-12 11:57:19.805481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.596 [2024-07-12 11:57:19.805495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.596 [2024-07-12 11:57:19.805511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.596 [2024-07-12 11:57:19.805524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.805979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.805993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.806512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.806529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a6e20 is same with the state(5) to be set 00:20:30.597 [2024-07-12 11:57:19.807949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.807974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.807996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.597 [2024-07-12 11:57:19.808204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.597 [2024-07-12 11:57:19.808221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.808973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.808989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.598 [2024-07-12 11:57:19.809557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.598 [2024-07-12 11:57:19.809574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.809995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.810011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f4c0 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.811713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:30.599 [2024-07-12 11:57:19.811747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:30.599 [2024-07-12 11:57:19.811768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.599 [2024-07-12 11:57:19.811785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:30.599 [2024-07-12 11:57:19.811846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c1310 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.812539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.599 [2024-07-12 11:57:19.812571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b4910 with addr=10.0.0.2, port=4420 00:20:30.599 [2024-07-12 11:57:19.812589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4910 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.812687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.599 [2024-07-12 11:57:19.812714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3f0 with addr=10.0.0.2, port=4420 00:20:30.599 [2024-07-12 11:57:19.812730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3f0 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.812827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.599 [2024-07-12 11:57:19.812853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300780 with addr=10.0.0.2, port=4420 00:20:30.599 [2024-07-12 11:57:19.812876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300780 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.812961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.599 [2024-07-12 11:57:19.812987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b46d0 with addr=10.0.0.2, port=4420 00:20:30.599 [2024-07-12 11:57:19.813002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b46d0 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.813046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4af0 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.813093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55ad0 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.813479] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:30.599 [2024-07-12 11:57:19.813768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:30.599 [2024-07-12 11:57:19.813904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.599 [2024-07-12 11:57:19.813939] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c1310 with addr=10.0.0.2, port=4420 00:20:30.599 [2024-07-12 11:57:19.813955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c1310 is same with the state(5) to be set 00:20:30.599 [2024-07-12 11:57:19.813973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4910 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.813992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3f0 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.814009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300780 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.814026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b46d0 (9): Bad file descriptor 00:20:30.599 [2024-07-12 11:57:19.814104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.599 [2024-07-12 11:57:19.814620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.599 [2024-07-12 11:57:19.814636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.814973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.814987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:30.600 [2024-07-12 11:57:19.815018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 
11:57:19.815338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815649] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.600 [2024-07-12 11:57:19.815978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.600 [2024-07-12 11:57:19.815993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.816027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.816058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.816088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.816118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.816159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.816173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a8000 is same with the state(5) to be set 00:20:30.601 [2024-07-12 11:57:19.817453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.817983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.817997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.601 [2024-07-12 11:57:19.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.601 [2024-07-12 11:57:19.818398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:30.602 [2024-07-12 11:57:19.818842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.818972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.818986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 
11:57:19.819169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.819487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.819501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24445d0 is same with the state(5) to be set 00:20:30.602 [2024-07-12 11:57:19.820787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.820832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.820871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.820904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.820935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.820965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.820979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.821000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.602 [2024-07-12 11:57:19.821015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.602 [2024-07-12 11:57:19.821031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.821978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.821992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.603 [2024-07-12 11:57:19.822350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-07-12 11:57:19.822367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.822600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.822615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb210 is same with the state(5) to be set 00:20:30.604 [2024-07-12 11:57:19.822705] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22fb210 was disconnected and freed. reset controller. 
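The "(00/08)" printed after ABORTED - SQ DELETION above is SPDK's (sct/sc) rendering of the NVMe completion status: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), meaning every READ still queued on the qpair was aborted when its submission queue was deleted for the controller reset. A minimal decoding sketch in Python (the helper and the partial status table are illustrative, not part of the test suite):

    # decode_status.py - illustrative helper; maps the "(sct/sc)" pair that
    # spdk_nvme_print_completion() logs to a readable NVMe status name.
    GENERIC_STATUS = {          # NVMe status code type 0x0, partial table
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct 0x{sct:x} / sc 0x{sc:02x}"

    print(decode(0x0, 0x08))    # -> ABORTED - SQ DELETION, the pair logged above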
00:20:30.604 [2024-07-12 11:57:19.822759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:30.604 [2024-07-12 11:57:19.822784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:30.604 [2024-07-12 11:57:19.823030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.604 [2024-07-12 11:57:19.823060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f4b0 with addr=10.0.0.2, port=4420 00:20:30.604 [2024-07-12 11:57:19.823077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f4b0 is same with the state(5) to be set 00:20:30.604 [2024-07-12 11:57:19.823103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c1310 (9): Bad file descriptor 00:20:30.604 [2024-07-12 11:57:19.823122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:30.604 [2024-07-12 11:57:19.823136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:30.604 [2024-07-12 11:57:19.823160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:30.604 [2024-07-12 11:57:19.823181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:30.604 [2024-07-12 11:57:19.823196] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:30.604 [2024-07-12 11:57:19.823220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:30.604 [2024-07-12 11:57:19.823238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.604 [2024-07-12 11:57:19.823251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:30.604 [2024-07-12 11:57:19.823265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:30.604 [2024-07-12 11:57:19.823283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:30.604 [2024-07-12 11:57:19.823297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:30.604 [2024-07-12 11:57:19.823310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:30.604 [2024-07-12 11:57:19.823361] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.604 [2024-07-12 11:57:19.823385] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.604 [2024-07-12 11:57:19.823404] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.604 [2024-07-12 11:57:19.823427] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.604 [2024-07-12 11:57:19.823446] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:30.604 [2024-07-12 11:57:19.824642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.604 [2024-07-12 11:57:19.824667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.604 [2024-07-12 11:57:19.824686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.604 [2024-07-12 11:57:19.824699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.604 [2024-07-12 11:57:19.824723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:30.604 [2024-07-12 11:57:19.824845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.604 [2024-07-12 11:57:19.824879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fe610 with addr=10.0.0.2, port=4420 00:20:30.604 [2024-07-12 11:57:19.824898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe610 is same with the state(5) to be set 00:20:30.604 [2024-07-12 11:57:19.825005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.604 [2024-07-12 11:57:19.825031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca470 with addr=10.0.0.2, port=4420 00:20:30.604 [2024-07-12 11:57:19.825047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca470 is same with the state(5) to be set 00:20:30.604 [2024-07-12 11:57:19.825066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f4b0 (9): Bad file descriptor 00:20:30.604 [2024-07-12 11:57:19.825083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:30.604 [2024-07-12 11:57:19.825097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:30.604 [2024-07-12 11:57:19.825111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:30.604 [2024-07-12 11:57:19.825137] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.604 [2024-07-12 11:57:19.825167] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
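The recurring "connect() failed, errno = 111" lines from posix.c explain why each reconnect attempt above fails: on Linux errno 111 is ECONNREFUSED, so the target at 10.0.0.2 port 4420 is refusing new TCP connections while the subsystems are being torn down, and each controller ends up in the failed state. A quick, illustrative check of that errno mapping in Python:

    # errno_check.py - illustrative only; confirms what errno 111 means on Linux
    import errno
    import os

    print(errno.errorcode[111])   # -> ECONNREFUSED
    print(os.strerror(111))       # -> Connection refused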
00:20:30.604 [2024-07-12 11:57:19.825757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.825802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.825835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.825885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.825930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.825972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.825994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 
11:57:19.826163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.604 [2024-07-12 11:57:19.826348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.604 [2024-07-12 11:57:19.826362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.826968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.605 [2024-07-12 11:57:19.827563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.605 [2024-07-12 11:57:19.827578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.606 [2024-07-12 11:57:19.827929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-07-12 11:57:19.827944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f87d0 is same with the state(5) to be set 00:20:30.606 [2024-07-12 11:57:19.829641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.606 task offset: 32128 on job bdev=Nvme3n1 fails 00:20:30.606 00:20:30.606 Latency(us) 00:20:30.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme1n1 ended in about 0.87 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme1n1 : 0.87 147.75 9.23 73.87 0.00 285378.43 30098.01 254765.13 00:20:30.606 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme2n1 ended in about 0.88 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme2n1 : 0.88 146.13 9.13 73.06 0.00 282473.81 22913.33 246997.90 00:20:30.606 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme3n1 ended in about 0.85 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme3n1 : 0.85 226.53 14.16 75.51 0.00 200126.77 7524.50 242337.56 00:20:30.606 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme4n1 ended in about 0.85 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme4n1 : 0.85 222.92 13.93 8.21 0.00 254888.59 13786.83 242337.56 00:20:30.606 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme5n1 ended in about 0.88 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme5n1 : 0.88 145.58 9.10 72.79 0.00 265323.96 21165.70 270299.59 00:20:30.606 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme6n1 ended in about 0.89 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme6n1 : 0.89 144.19 9.01 72.10 
0.00 262154.11 21554.06 251658.24 00:20:30.606 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme7n1 ended in about 0.86 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme7n1 : 0.86 148.34 9.27 74.17 0.00 248013.05 20486.07 250104.79 00:20:30.606 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme8n1 ended in about 0.88 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme8n1 : 0.88 225.31 14.08 64.54 0.00 185693.49 16796.63 253211.69 00:20:30.606 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme9n1 : 0.86 223.79 13.99 0.00 0.00 234451.50 21359.88 246997.90 00:20:30.606 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.606 Job: Nvme10n1 ended in about 0.87 seconds with error 00:20:30.606 Verification LBA range: start 0x0 length 0x400 00:20:30.606 Nvme10n1 : 0.87 147.16 9.20 73.58 0.00 232655.08 20486.07 273406.48 00:20:30.606 =================================================================================================================== 00:20:30.606 Total : 1777.70 111.11 587.83 0.00 241884.77 7524.50 273406.48 00:20:30.606 [2024-07-12 11:57:19.857613] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:30.606 [2024-07-12 11:57:19.857700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:30.606 [2024-07-12 11:57:19.857984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.606 [2024-07-12 11:57:19.858019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b4af0 with addr=10.0.0.2, port=4420 00:20:30.606 [2024-07-12 11:57:19.858039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4af0 is same with the state(5) to be set 00:20:30.606 [2024-07-12 11:57:19.858067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe610 (9): Bad file descriptor 00:20:30.606 [2024-07-12 11:57:19.858091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca470 (9): Bad file descriptor 00:20:30.606 [2024-07-12 11:57:19.858109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:30.606 [2024-07-12 11:57:19.858123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:30.606 [2024-07-12 11:57:19.858139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:30.606 [2024-07-12 11:57:19.858588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
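A quick consistency check on the bdevperf summary above: with the 65536-byte IO size used by these jobs, MiB/s should equal IOPS / 16, so the Total row's 1777.70 IOPS works out to about 111.11 MiB/s, which matches the reported 111.11. The arithmetic as a plain shell one-liner (nothing SPDK-specific is assumed):

    awk 'BEGIN { printf "%.2f MiB/s\n", 1777.70 * 65536 / 1048576 }'   # prints 111.11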
00:20:30.606 [2024-07-12 11:57:19.858743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.606 [2024-07-12 11:57:19.858773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55ad0 with addr=10.0.0.2, port=4420 00:20:30.606 [2024-07-12 11:57:19.858790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55ad0 is same with the state(5) to be set 00:20:30.606 [2024-07-12 11:57:19.858810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4af0 (9): Bad file descriptor 00:20:30.606 [2024-07-12 11:57:19.858828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:30.606 [2024-07-12 11:57:19.858851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:30.606 [2024-07-12 11:57:19.858876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:30.606 [2024-07-12 11:57:19.858899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:30.606 [2024-07-12 11:57:19.858919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:30.606 [2024-07-12 11:57:19.858932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:30.606 [2024-07-12 11:57:19.858989] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.606 [2024-07-12 11:57:19.859012] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.606 [2024-07-12 11:57:19.859056] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.606 [2024-07-12 11:57:19.859409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.606 [2024-07-12 11:57:19.859434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.606 [2024-07-12 11:57:19.859472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55ad0 (9): Bad file descriptor 00:20:30.606 [2024-07-12 11:57:19.859492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:30.606 [2024-07-12 11:57:19.859506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:30.606 [2024-07-12 11:57:19.859521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:20:30.606 [2024-07-12 11:57:19.859592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.606 [2024-07-12 11:57:19.859727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:30.606 [2024-07-12 11:57:19.859744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:30.606 [2024-07-12 11:57:19.859758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:30.606 [2024-07-12 11:57:19.859796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:30.606 [2024-07-12 11:57:19.859835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.606 [2024-07-12 11:57:19.859965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.606 [2024-07-12 11:57:19.859993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b46d0 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b46d0 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.607 [2024-07-12 11:57:19.860126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300780 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300780 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.607 [2024-07-12 11:57:19.860277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3f0 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3f0 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.607 [2024-07-12 11:57:19.860405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b4910 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4910 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.607 [2024-07-12 11:57:19.860535] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c1310 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c1310 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.607 [2024-07-12 11:57:19.860688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f4b0 with addr=10.0.0.2, port=4420 00:20:30.607 [2024-07-12 11:57:19.860704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f4b0 is same with the state(5) to be set 00:20:30.607 [2024-07-12 11:57:19.860723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b46d0 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300780 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3f0 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4910 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c1310 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f4b0 (9): Bad file descriptor 00:20:30.607 [2024-07-12 11:57:19.860859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.860882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.860897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:30.607 [2024-07-12 11:57:19.860922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.860937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.860950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:30.607 [2024-07-12 11:57:19.860966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.860979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.860993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:30.607 [2024-07-12 11:57:19.861010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.861025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.861043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
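The repeated "connect() failed, errno = 111" entries above are consistent with the target having been brought down by this shutdown test: on Linux, errno 111 is ECONNREFUSED, so each reconnect attempt to 10.0.0.2:4420 is refused and the affected controllers end up in the failed state reported by nvme_ctrlr_fail. The errno mapping can be confirmed with a standard one-liner (not part of the test scripts):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused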
00:20:30.607 [2024-07-12 11:57:19.861060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.861074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.861086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:30.607 [2024-07-12 11:57:19.861125] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.607 [2024-07-12 11:57:19.861143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.607 [2024-07-12 11:57:19.861166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.607 [2024-07-12 11:57:19.861177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.607 [2024-07-12 11:57:19.861188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.607 [2024-07-12 11:57:19.861201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:30.607 [2024-07-12 11:57:19.861213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:30.607 [2024-07-12 11:57:19.861230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:30.607 [2024-07-12 11:57:19.861265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:31.175 11:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:31.175 11:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 970597 00:20:32.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (970597) - No such process 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:20:32.176 rmmod nvme_tcp 00:20:32.176 rmmod nvme_fabrics 00:20:32.176 rmmod nvme_keyring 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.176 11:57:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.079 00:20:34.079 real 0m8.132s 00:20:34.079 user 0m20.908s 00:20:34.079 sys 0m1.488s 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.079 ************************************ 00:20:34.079 END TEST nvmf_shutdown_tc3 00:20:34.079 ************************************ 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:34.079 00:20:34.079 real 0m29.580s 00:20:34.079 user 1m24.994s 00:20:34.079 sys 0m6.471s 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:34.079 11:57:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:34.079 ************************************ 00:20:34.079 END TEST nvmf_shutdown 00:20:34.079 ************************************ 00:20:34.079 11:57:23 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:20:34.079 11:57:23 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:34.079 11:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.079 11:57:23 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:20:34.079 11:57:23 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:34.079 11:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.338 11:57:23 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:20:34.338 11:57:23 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:34.338 11:57:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:34.338 11:57:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:34.338 11:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.338 
************************************ 00:20:34.338 START TEST nvmf_multicontroller 00:20:34.338 ************************************ 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:34.338 * Looking for test storage... 00:20:34.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.338 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.339 11:57:23 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.339 11:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.240 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.241 11:57:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:36.241 00:20:36.241 --- 10.0.0.2 ping statistics --- 00:20:36.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.241 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:36.241 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:20:36.499 00:20:36.499 --- 10.0.0.1 ping statistics --- 00:20:36.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.499 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=973115 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 973115 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 973115 ']' 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:36.499 11:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.499 [2024-07-12 11:57:25.810831] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:36.499 [2024-07-12 11:57:25.810934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.499 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.499 [2024-07-12 11:57:25.879198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.756 [2024-07-12 11:57:25.994816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:36.756 [2024-07-12 11:57:25.994864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.756 [2024-07-12 11:57:25.994888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.756 [2024-07-12 11:57:25.994901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.756 [2024-07-12 11:57:25.994911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.756 [2024-07-12 11:57:25.994961] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.756 [2024-07-12 11:57:25.995016] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.756 [2024-07-12 11:57:25.995019] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.321 [2024-07-12 11:57:26.776226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.321 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 Malloc0 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 [2024-07-12 11:57:26.835818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 [2024-07-12 11:57:26.843749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 Malloc1 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=973272 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 973272 /var/tmp/bdevperf.sock 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 973272 ']' 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:37.579 11:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.837 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:37.837 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:20:37.837 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:37.837 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.837 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 NVMe0n1 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.095 1 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 request: 00:20:38.095 { 00:20:38.095 "name": "NVMe0", 00:20:38.095 "trtype": "tcp", 00:20:38.095 "traddr": "10.0.0.2", 00:20:38.095 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:38.095 "hostaddr": "10.0.0.2", 00:20:38.095 "hostsvcid": "60000", 00:20:38.095 "adrfam": "ipv4", 00:20:38.095 "trsvcid": "4420", 00:20:38.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.095 "method": "bdev_nvme_attach_controller", 00:20:38.095 "req_id": 1 00:20:38.095 } 00:20:38.095 Got JSON-RPC error response 00:20:38.095 response: 00:20:38.095 { 00:20:38.095 "code": -114, 00:20:38.095 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:38.095 } 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.095 request: 00:20:38.095 { 00:20:38.095 "name": "NVMe0", 00:20:38.095 "trtype": "tcp", 00:20:38.095 "traddr": "10.0.0.2", 00:20:38.095 "hostaddr": "10.0.0.2", 00:20:38.095 "hostsvcid": "60000", 00:20:38.095 "adrfam": "ipv4", 00:20:38.095 "trsvcid": "4420", 00:20:38.095 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.095 "method": "bdev_nvme_attach_controller", 00:20:38.095 "req_id": 1 00:20:38.095 } 00:20:38.095 Got JSON-RPC error response 00:20:38.095 response: 00:20:38.095 { 00:20:38.095 "code": -114, 00:20:38.095 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:38.095 } 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 request: 00:20:38.095 { 00:20:38.095 "name": "NVMe0", 00:20:38.095 "trtype": "tcp", 00:20:38.095 "traddr": "10.0.0.2", 00:20:38.095 "hostaddr": "10.0.0.2", 00:20:38.095 "hostsvcid": "60000", 00:20:38.095 "adrfam": "ipv4", 00:20:38.095 "trsvcid": "4420", 00:20:38.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.095 "multipath": "disable", 00:20:38.095 "method": "bdev_nvme_attach_controller", 00:20:38.095 "req_id": 1 00:20:38.095 } 00:20:38.095 Got JSON-RPC error response 00:20:38.095 response: 00:20:38.095 { 00:20:38.095 "code": -114, 00:20:38.095 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:38.095 } 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # es=1 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:38.095 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 request: 00:20:38.096 { 00:20:38.096 "name": "NVMe0", 00:20:38.096 "trtype": "tcp", 00:20:38.096 "traddr": "10.0.0.2", 00:20:38.096 "hostaddr": "10.0.0.2", 00:20:38.096 "hostsvcid": "60000", 00:20:38.096 "adrfam": "ipv4", 00:20:38.096 "trsvcid": "4420", 00:20:38.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.096 "multipath": "failover", 00:20:38.096 "method": "bdev_nvme_attach_controller", 00:20:38.096 "req_id": 1 00:20:38.096 } 00:20:38.096 Got JSON-RPC error response 00:20:38.096 response: 00:20:38.096 { 00:20:38.096 "code": -114, 00:20:38.096 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:38.096 } 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.096 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.352 00:20:38.352 11:57:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.352 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.352 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.609 11:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.609 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:38.610 11:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.568 0 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 973272 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 973272 ']' 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 973272 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:39.568 11:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 973272 00:20:39.568 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:39.568 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:39.568 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 973272' 00:20:39.568 killing process with pid 
973272 00:20:39.568 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 973272 00:20:39.568 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 973272 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:20:39.826 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:39.826 [2024-07-12 11:57:26.944842] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:39.826 [2024-07-12 11:57:26.944943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973272 ] 00:20:39.826 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.826 [2024-07-12 11:57:27.005891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.826 [2024-07-12 11:57:27.115278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.826 [2024-07-12 11:57:27.836797] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 231f2b6a-1098-4d7d-868f-a332bacc2c76 already exists 00:20:39.826 [2024-07-12 11:57:27.836840] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:231f2b6a-1098-4d7d-868f-a332bacc2c76 alias for bdev NVMe1n1 00:20:39.826 [2024-07-12 11:57:27.836883] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:39.826 Running I/O for 1 seconds... 
00:20:39.826 00:20:39.826 Latency(us) 00:20:39.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.826 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:39.826 NVMe0n1 : 1.00 18873.19 73.72 0.00 0.00 6771.89 2111.72 11942.12 00:20:39.826 =================================================================================================================== 00:20:39.826 Total : 18873.19 73.72 0.00 0.00 6771.89 2111.72 11942.12 00:20:39.826 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.826 00:20:39.826 Latency(us) 00:20:39.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.826 =================================================================================================================== 00:20:39.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.826 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.826 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.083 rmmod nvme_tcp 00:20:40.083 rmmod nvme_fabrics 00:20:40.083 rmmod nvme_keyring 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 973115 ']' 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 973115 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 973115 ']' 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 973115 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 973115 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 973115' 00:20:40.083 killing process with pid 973115 00:20:40.083 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 973115 00:20:40.083 11:57:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 973115 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.339 11:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.863 11:57:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.863 00:20:42.863 real 0m8.183s 00:20:42.863 user 0m14.351s 00:20:42.863 sys 0m2.276s 00:20:42.863 11:57:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:42.863 11:57:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.863 ************************************ 00:20:42.863 END TEST nvmf_multicontroller 00:20:42.863 ************************************ 00:20:42.863 11:57:31 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:42.863 11:57:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:42.863 11:57:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:42.863 11:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.863 ************************************ 00:20:42.863 START TEST nvmf_aer 00:20:42.863 ************************************ 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:42.863 * Looking for test storage... 
00:20:42.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.863 11:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:44.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.770 
11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.770 11:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:20:44.770 00:20:44.770 --- 10.0.0.2 ping statistics --- 00:20:44.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.770 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:20:44.770 00:20:44.770 --- 10.0.0.1 ping statistics --- 00:20:44.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.770 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.770 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=975482 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 975482 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 975482 ']' 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:44.771 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 [2024-07-12 11:57:34.157994] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:44.771 [2024-07-12 11:57:34.158088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.771 [2024-07-12 11:57:34.243652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.028 [2024-07-12 11:57:34.382527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.028 [2024-07-12 11:57:34.382594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:45.028 [2024-07-12 11:57:34.382634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.028 [2024-07-12 11:57:34.382661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.028 [2024-07-12 11:57:34.382681] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.028 [2024-07-12 11:57:34.382787] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.028 [2024-07-12 11:57:34.382852] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.028 [2024-07-12 11:57:34.382918] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.028 [2024-07-12 11:57:34.382928] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 [2024-07-12 11:57:34.557733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 Malloc0 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 [2024-07-12 11:57:34.609519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:45.286 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.287 [ 00:20:45.287 { 00:20:45.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.287 "subtype": "Discovery", 00:20:45.287 "listen_addresses": [], 00:20:45.287 "allow_any_host": true, 00:20:45.287 "hosts": [] 00:20:45.287 }, 00:20:45.287 { 00:20:45.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.287 "subtype": "NVMe", 00:20:45.287 "listen_addresses": [ 00:20:45.287 { 00:20:45.287 "trtype": "TCP", 00:20:45.287 "adrfam": "IPv4", 00:20:45.287 "traddr": "10.0.0.2", 00:20:45.287 "trsvcid": "4420" 00:20:45.287 } 00:20:45.287 ], 00:20:45.287 "allow_any_host": true, 00:20:45.287 "hosts": [], 00:20:45.287 "serial_number": "SPDK00000000000001", 00:20:45.287 "model_number": "SPDK bdev Controller", 00:20:45.287 "max_namespaces": 2, 00:20:45.287 "min_cntlid": 1, 00:20:45.287 "max_cntlid": 65519, 00:20:45.287 "namespaces": [ 00:20:45.287 { 00:20:45.287 "nsid": 1, 00:20:45.287 "bdev_name": "Malloc0", 00:20:45.287 "name": "Malloc0", 00:20:45.287 "nguid": "AFAB91364D3F490FAB44900698184842", 00:20:45.287 "uuid": "afab9136-4d3f-490f-ab44-900698184842" 00:20:45.287 } 00:20:45.287 ] 00:20:45.287 } 00:20:45.287 ] 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=975548 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:20:45.287 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:20:45.287 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 Malloc1 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 Asynchronous Event Request test 00:20:45.544 Attaching to 10.0.0.2 00:20:45.544 Attached to 10.0.0.2 00:20:45.544 Registering asynchronous event callbacks... 00:20:45.544 Starting namespace attribute notice tests for all controllers... 00:20:45.544 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:45.544 aer_cb - Changed Namespace 00:20:45.544 Cleaning up... 00:20:45.544 [ 00:20:45.544 { 00:20:45.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.544 "subtype": "Discovery", 00:20:45.544 "listen_addresses": [], 00:20:45.544 "allow_any_host": true, 00:20:45.544 "hosts": [] 00:20:45.544 }, 00:20:45.544 { 00:20:45.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.544 "subtype": "NVMe", 00:20:45.544 "listen_addresses": [ 00:20:45.544 { 00:20:45.544 "trtype": "TCP", 00:20:45.544 "adrfam": "IPv4", 00:20:45.544 "traddr": "10.0.0.2", 00:20:45.544 "trsvcid": "4420" 00:20:45.544 } 00:20:45.544 ], 00:20:45.544 "allow_any_host": true, 00:20:45.544 "hosts": [], 00:20:45.544 "serial_number": "SPDK00000000000001", 00:20:45.544 "model_number": "SPDK bdev Controller", 00:20:45.544 "max_namespaces": 2, 00:20:45.544 "min_cntlid": 1, 00:20:45.544 "max_cntlid": 65519, 00:20:45.544 "namespaces": [ 00:20:45.544 { 00:20:45.544 "nsid": 1, 00:20:45.544 "bdev_name": "Malloc0", 00:20:45.544 "name": "Malloc0", 00:20:45.544 "nguid": "AFAB91364D3F490FAB44900698184842", 00:20:45.544 "uuid": "afab9136-4d3f-490f-ab44-900698184842" 00:20:45.544 }, 00:20:45.544 { 00:20:45.544 "nsid": 2, 00:20:45.544 "bdev_name": "Malloc1", 00:20:45.544 "name": "Malloc1", 00:20:45.544 "nguid": "E56AE3FDF27E4668AFA1F964A4B68617", 00:20:45.544 "uuid": "e56ae3fd-f27e-4668-afa1-f964a4b68617" 00:20:45.544 } 00:20:45.544 ] 00:20:45.544 } 00:20:45.544 ] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 975548 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.544 11:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.544 rmmod nvme_tcp 00:20:45.544 rmmod nvme_fabrics 00:20:45.544 rmmod nvme_keyring 00:20:45.544 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 975482 ']' 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 975482 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 975482 ']' 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 975482 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 975482 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 975482' 00:20:45.802 killing process with pid 975482 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 975482 00:20:45.802 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 975482 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:46.062 11:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.026 11:57:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:48.026 00:20:48.026 real 0m5.535s 00:20:48.026 user 0m4.422s 00:20:48.026 sys 0m2.009s 00:20:48.026 11:57:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:48.026 11:57:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:48.026 ************************************ 00:20:48.026 END TEST nvmf_aer 00:20:48.026 ************************************ 00:20:48.026 11:57:37 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:48.026 11:57:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:48.026 11:57:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:48.026 11:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.026 ************************************ 00:20:48.026 START TEST nvmf_async_init 00:20:48.026 ************************************ 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:48.026 * Looking for test storage... 00:20:48.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.026 11:57:37 
nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dc1cf9debf6642e2845b2bd8cdc3715c 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.026 11:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.563 11:57:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.563 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:50.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.563 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:20:50.564 00:20:50.564 --- 10.0.0.2 ping statistics --- 00:20:50.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.564 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:50.564 00:20:50.564 --- 10.0.0.1 ping statistics --- 00:20:50.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.564 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=977565 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 977565 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 977565 ']' 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:50.564 11:57:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.564 [2024-07-12 11:57:39.638830] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:50.564 [2024-07-12 11:57:39.638934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.564 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.564 [2024-07-12 11:57:39.707041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.564 [2024-07-12 11:57:39.822597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.564 [2024-07-12 11:57:39.822650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.564 [2024-07-12 11:57:39.822666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.564 [2024-07-12 11:57:39.822679] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.564 [2024-07-12 11:57:39.822690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
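Editor's note: the nvmftestinit/nvmfappstart trace above boils down to the short shell sketch below. It only restates commands visible in the log; the interface names cvl_0_0/cvl_0_1 are specific to this E810 host, and the relative nvmf_tgt path plus the trailing '&' are assumptions added for readability, not part of the original trace.

    #!/usr/bin/env bash
    # Condensed sketch of the target-namespace setup traced above (assumptions noted in the note).
    set -euo pipefail

    TARGET_IF=cvl_0_0        # NIC moved into the target namespace
    INITIATOR_IF=cvl_0_1     # NIC left in the default namespace
    NS=cvl_0_0_ns_spdk
    INITIATOR_IP=10.0.0.1
    TARGET_IP=10.0.0.2

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the default listener port.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Connectivity sanity checks in both directions.
    ping -c 1 "$TARGET_IP"
    ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

    # Start the SPDK NVMe-oF target inside the namespace (1 core, all trace groups),
    # matching the nvmf_tgt -i 0 -e 0xFFFF -m 0x1 invocation in the log; path and
    # backgrounding are illustrative assumptions.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

After the target is listening, the test drives it over rpc.py exactly as traced below (nvmf_create_transport, bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener, bdev_nvme_attach_controller).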
00:20:50.564 [2024-07-12 11:57:39.822719] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.129 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:51.129 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:20:51.129 11:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.129 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:51.129 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 [2024-07-12 11:57:40.638626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 null0 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dc1cf9debf6642e2845b2bd8cdc3715c 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.387 [2024-07-12 11:57:40.678825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.387 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.644 nvme0n1 00:20:51.644 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.644 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:51.644 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [ 00:20:51.645 { 00:20:51.645 "name": "nvme0n1", 00:20:51.645 "aliases": [ 00:20:51.645 "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c" 00:20:51.645 ], 00:20:51.645 "product_name": "NVMe disk", 00:20:51.645 "block_size": 512, 00:20:51.645 "num_blocks": 2097152, 00:20:51.645 "uuid": "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c", 00:20:51.645 "assigned_rate_limits": { 00:20:51.645 "rw_ios_per_sec": 0, 00:20:51.645 "rw_mbytes_per_sec": 0, 00:20:51.645 "r_mbytes_per_sec": 0, 00:20:51.645 "w_mbytes_per_sec": 0 00:20:51.645 }, 00:20:51.645 "claimed": false, 00:20:51.645 "zoned": false, 00:20:51.645 "supported_io_types": { 00:20:51.645 "read": true, 00:20:51.645 "write": true, 00:20:51.645 "unmap": false, 00:20:51.645 "write_zeroes": true, 00:20:51.645 "flush": true, 00:20:51.645 "reset": true, 00:20:51.645 "compare": true, 00:20:51.645 "compare_and_write": true, 00:20:51.645 "abort": true, 00:20:51.645 "nvme_admin": true, 00:20:51.645 "nvme_io": true 00:20:51.645 }, 00:20:51.645 "memory_domains": [ 00:20:51.645 { 00:20:51.645 "dma_device_id": "system", 00:20:51.645 "dma_device_type": 1 00:20:51.645 } 00:20:51.645 ], 00:20:51.645 "driver_specific": { 00:20:51.645 "nvme": [ 00:20:51.645 { 00:20:51.645 "trid": { 00:20:51.645 "trtype": "TCP", 00:20:51.645 "adrfam": "IPv4", 00:20:51.645 "traddr": "10.0.0.2", 00:20:51.645 "trsvcid": "4420", 00:20:51.645 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:51.645 }, 00:20:51.645 "ctrlr_data": { 00:20:51.645 "cntlid": 1, 00:20:51.645 "vendor_id": "0x8086", 00:20:51.645 "model_number": "SPDK bdev Controller", 00:20:51.645 "serial_number": "00000000000000000000", 00:20:51.645 "firmware_revision": "24.09", 00:20:51.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.645 "oacs": { 00:20:51.645 "security": 0, 00:20:51.645 "format": 0, 00:20:51.645 "firmware": 0, 00:20:51.645 "ns_manage": 0 00:20:51.645 }, 00:20:51.645 "multi_ctrlr": true, 00:20:51.645 "ana_reporting": false 00:20:51.645 }, 00:20:51.645 "vs": { 00:20:51.645 "nvme_version": "1.3" 00:20:51.645 }, 00:20:51.645 "ns_data": { 00:20:51.645 "id": 1, 00:20:51.645 "can_share": true 00:20:51.645 } 00:20:51.645 } 00:20:51.645 ], 00:20:51.645 "mp_policy": "active_passive" 00:20:51.645 } 00:20:51.645 } 00:20:51.645 ] 00:20:51.645 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:51.645 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-07-12 11:57:40.931505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:51.645 [2024-07-12 11:57:40.931593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e620 (9): Bad file descriptor 00:20:51.645 [2024-07-12 11:57:41.074017] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [ 00:20:51.645 { 00:20:51.645 "name": "nvme0n1", 00:20:51.645 "aliases": [ 00:20:51.645 "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c" 00:20:51.645 ], 00:20:51.645 "product_name": "NVMe disk", 00:20:51.645 "block_size": 512, 00:20:51.645 "num_blocks": 2097152, 00:20:51.645 "uuid": "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c", 00:20:51.645 "assigned_rate_limits": { 00:20:51.645 "rw_ios_per_sec": 0, 00:20:51.645 "rw_mbytes_per_sec": 0, 00:20:51.645 "r_mbytes_per_sec": 0, 00:20:51.645 "w_mbytes_per_sec": 0 00:20:51.645 }, 00:20:51.645 "claimed": false, 00:20:51.645 "zoned": false, 00:20:51.645 "supported_io_types": { 00:20:51.645 "read": true, 00:20:51.645 "write": true, 00:20:51.645 "unmap": false, 00:20:51.645 "write_zeroes": true, 00:20:51.645 "flush": true, 00:20:51.645 "reset": true, 00:20:51.645 "compare": true, 00:20:51.645 "compare_and_write": true, 00:20:51.645 "abort": true, 00:20:51.645 "nvme_admin": true, 00:20:51.645 "nvme_io": true 00:20:51.645 }, 00:20:51.645 "memory_domains": [ 00:20:51.645 { 00:20:51.645 "dma_device_id": "system", 00:20:51.645 "dma_device_type": 1 00:20:51.645 } 00:20:51.645 ], 00:20:51.645 "driver_specific": { 00:20:51.645 "nvme": [ 00:20:51.645 { 00:20:51.645 "trid": { 00:20:51.645 "trtype": "TCP", 00:20:51.645 "adrfam": "IPv4", 00:20:51.645 "traddr": "10.0.0.2", 00:20:51.645 "trsvcid": "4420", 00:20:51.645 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:51.645 }, 00:20:51.645 "ctrlr_data": { 00:20:51.645 "cntlid": 2, 00:20:51.645 "vendor_id": "0x8086", 00:20:51.645 "model_number": "SPDK bdev Controller", 00:20:51.645 "serial_number": "00000000000000000000", 00:20:51.645 "firmware_revision": "24.09", 00:20:51.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.645 "oacs": { 00:20:51.645 "security": 0, 00:20:51.645 "format": 0, 00:20:51.645 "firmware": 0, 00:20:51.645 "ns_manage": 0 00:20:51.645 }, 00:20:51.645 "multi_ctrlr": true, 00:20:51.645 "ana_reporting": false 00:20:51.645 }, 00:20:51.645 "vs": { 00:20:51.645 "nvme_version": "1.3" 00:20:51.645 }, 00:20:51.645 "ns_data": { 00:20:51.645 "id": 1, 00:20:51.645 "can_share": true 00:20:51.645 } 00:20:51.645 } 00:20:51.645 ], 00:20:51.645 "mp_policy": "active_passive" 00:20:51.645 } 00:20:51.645 } 00:20:51.645 ] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rrXJqt7Cww 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rrXJqt7Cww 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-07-12 11:57:41.124165] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.645 [2024-07-12 11:57:41.124297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rrXJqt7Cww 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-07-12 11:57:41.132196] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rrXJqt7Cww 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.645 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.904 [2024-07-12 11:57:41.140210] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.904 [2024-07-12 11:57:41.140279] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:51.904 nvme0n1 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.904 [ 00:20:51.904 { 00:20:51.904 "name": "nvme0n1", 00:20:51.904 "aliases": [ 00:20:51.904 "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c" 00:20:51.904 ], 00:20:51.904 
"product_name": "NVMe disk", 00:20:51.904 "block_size": 512, 00:20:51.904 "num_blocks": 2097152, 00:20:51.904 "uuid": "dc1cf9de-bf66-42e2-845b-2bd8cdc3715c", 00:20:51.904 "assigned_rate_limits": { 00:20:51.904 "rw_ios_per_sec": 0, 00:20:51.904 "rw_mbytes_per_sec": 0, 00:20:51.904 "r_mbytes_per_sec": 0, 00:20:51.904 "w_mbytes_per_sec": 0 00:20:51.904 }, 00:20:51.904 "claimed": false, 00:20:51.904 "zoned": false, 00:20:51.904 "supported_io_types": { 00:20:51.904 "read": true, 00:20:51.904 "write": true, 00:20:51.904 "unmap": false, 00:20:51.904 "write_zeroes": true, 00:20:51.904 "flush": true, 00:20:51.904 "reset": true, 00:20:51.904 "compare": true, 00:20:51.904 "compare_and_write": true, 00:20:51.904 "abort": true, 00:20:51.904 "nvme_admin": true, 00:20:51.904 "nvme_io": true 00:20:51.904 }, 00:20:51.904 "memory_domains": [ 00:20:51.904 { 00:20:51.904 "dma_device_id": "system", 00:20:51.904 "dma_device_type": 1 00:20:51.904 } 00:20:51.904 ], 00:20:51.904 "driver_specific": { 00:20:51.904 "nvme": [ 00:20:51.904 { 00:20:51.904 "trid": { 00:20:51.904 "trtype": "TCP", 00:20:51.904 "adrfam": "IPv4", 00:20:51.904 "traddr": "10.0.0.2", 00:20:51.904 "trsvcid": "4421", 00:20:51.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:51.904 }, 00:20:51.904 "ctrlr_data": { 00:20:51.904 "cntlid": 3, 00:20:51.904 "vendor_id": "0x8086", 00:20:51.904 "model_number": "SPDK bdev Controller", 00:20:51.904 "serial_number": "00000000000000000000", 00:20:51.904 "firmware_revision": "24.09", 00:20:51.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.904 "oacs": { 00:20:51.904 "security": 0, 00:20:51.904 "format": 0, 00:20:51.904 "firmware": 0, 00:20:51.904 "ns_manage": 0 00:20:51.904 }, 00:20:51.904 "multi_ctrlr": true, 00:20:51.904 "ana_reporting": false 00:20:51.904 }, 00:20:51.904 "vs": { 00:20:51.904 "nvme_version": "1.3" 00:20:51.904 }, 00:20:51.904 "ns_data": { 00:20:51.904 "id": 1, 00:20:51.904 "can_share": true 00:20:51.904 } 00:20:51.904 } 00:20:51.904 ], 00:20:51.904 "mp_policy": "active_passive" 00:20:51.904 } 00:20:51.904 } 00:20:51.904 ] 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rrXJqt7Cww 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.904 rmmod nvme_tcp 00:20:51.904 rmmod nvme_fabrics 00:20:51.904 rmmod nvme_keyring 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 977565 ']' 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 977565 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 977565 ']' 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 977565 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 977565 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 977565' 00:20:51.904 killing process with pid 977565 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 977565 00:20:51.904 [2024-07-12 11:57:41.335011] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.904 [2024-07-12 11:57:41.335052] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:51.904 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 977565 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.163 11:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.696 11:57:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.696 00:20:54.696 real 0m6.214s 00:20:54.696 user 0m2.995s 00:20:54.696 sys 0m1.847s 00:20:54.696 11:57:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:54.696 11:57:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:54.696 ************************************ 00:20:54.696 END TEST nvmf_async_init 00:20:54.696 ************************************ 00:20:54.696 11:57:43 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:54.696 11:57:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:54.696 11:57:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:54.696 11:57:43 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.696 ************************************ 00:20:54.696 START TEST dma 00:20:54.696 ************************************ 00:20:54.696 11:57:43 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:54.696 * Looking for test storage... 00:20:54.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.696 11:57:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.696 11:57:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.696 11:57:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.696 11:57:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.696 11:57:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.696 11:57:43 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.696 11:57:43 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.696 11:57:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:54.696 11:57:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.696 11:57:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.696 11:57:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:54.696 11:57:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:54.696 00:20:54.696 real 0m0.070s 00:20:54.696 user 0m0.028s 00:20:54.696 sys 0m0.048s 00:20:54.696 11:57:43 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:54.696 11:57:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:54.696 ************************************ 00:20:54.696 END TEST dma 00:20:54.696 ************************************ 00:20:54.696 11:57:43 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:54.696 11:57:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:54.696 11:57:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:54.696 11:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.696 ************************************ 00:20:54.696 START TEST 
nvmf_identify 00:20:54.696 ************************************ 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:54.696 * Looking for test storage... 00:20:54.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.696 11:57:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.697 11:57:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:56.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:56.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:56.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:56.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.599 11:57:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:20:56.599 00:20:56.599 --- 10.0.0.2 ping statistics --- 00:20:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.599 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:20:56.599 00:20:56.599 --- 10.0.0.1 ping statistics --- 00:20:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.599 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=979750 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 979750 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 979750 ']' 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:56.599 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.599 [2024-07-12 11:57:46.091667] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:20:56.600 [2024-07-12 11:57:46.091773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.858 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.858 [2024-07-12 11:57:46.163817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.858 [2024-07-12 11:57:46.279889] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.858 [2024-07-12 11:57:46.279938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
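The nvmf_tcp_init sequence traced above builds the two-port test topology: the first E810 port (cvl_0_0 in this run) is moved into a dedicated network namespace and addressed as the target side (10.0.0.2), its peer port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in the firewall, and reachability is verified with ping in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that same setup, assuming the cvl_0_0/cvl_0_1 interface names detected in this run:

# Sketch of the namespace-based NVMe/TCP topology from nvmf_tcp_init
# (cvl_0_0/cvl_0_1 are the names detected in this run; substitute your own ports)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic on 4420
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator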
00:20:56.858 [2024-07-12 11:57:46.279968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.858 [2024-07-12 11:57:46.279980] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.858 [2024-07-12 11:57:46.279990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.858 [2024-07-12 11:57:46.280048] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.858 [2024-07-12 11:57:46.280075] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.858 [2024-07-12 11:57:46.280125] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.858 [2024-07-12 11:57:46.280128] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 [2024-07-12 11:57:46.415668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 Malloc0 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 [2024-07-12 11:57:46.493454] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.118 [ 00:20:57.118 { 00:20:57.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:57.118 "subtype": "Discovery", 00:20:57.118 "listen_addresses": [ 00:20:57.118 { 00:20:57.118 "trtype": "TCP", 00:20:57.118 "adrfam": "IPv4", 00:20:57.118 "traddr": "10.0.0.2", 00:20:57.118 "trsvcid": "4420" 00:20:57.118 } 00:20:57.118 ], 00:20:57.118 "allow_any_host": true, 00:20:57.118 "hosts": [] 00:20:57.118 }, 00:20:57.118 { 00:20:57.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.118 "subtype": "NVMe", 00:20:57.118 "listen_addresses": [ 00:20:57.118 { 00:20:57.118 "trtype": "TCP", 00:20:57.118 "adrfam": "IPv4", 00:20:57.118 "traddr": "10.0.0.2", 00:20:57.118 "trsvcid": "4420" 00:20:57.118 } 00:20:57.118 ], 00:20:57.118 "allow_any_host": true, 00:20:57.118 "hosts": [], 00:20:57.118 "serial_number": "SPDK00000000000001", 00:20:57.118 "model_number": "SPDK bdev Controller", 00:20:57.118 "max_namespaces": 32, 00:20:57.118 "min_cntlid": 1, 00:20:57.118 "max_cntlid": 65519, 00:20:57.118 "namespaces": [ 00:20:57.118 { 00:20:57.118 "nsid": 1, 00:20:57.118 "bdev_name": "Malloc0", 00:20:57.118 "name": "Malloc0", 00:20:57.118 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:57.118 "eui64": "ABCDEF0123456789", 00:20:57.118 "uuid": "2c2e33a3-9dde-43af-ab87-75cf14e333a7" 00:20:57.118 } 00:20:57.118 ] 00:20:57.118 } 00:20:57.118 ] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.118 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:57.118 [2024-07-12 11:57:46.538028] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:20:57.118 [2024-07-12 11:57:46.538073] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979844 ] 00:20:57.118 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.118 [2024-07-12 11:57:46.574572] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:57.118 [2024-07-12 11:57:46.574632] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:57.118 [2024-07-12 11:57:46.574642] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:57.118 [2024-07-12 11:57:46.574658] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:57.118 [2024-07-12 11:57:46.574672] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:57.118 [2024-07-12 11:57:46.574922] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:57.118 [2024-07-12 11:57:46.574978] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e9b540 0 00:20:57.118 [2024-07-12 11:57:46.585885] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:57.118 [2024-07-12 11:57:46.585906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:57.118 [2024-07-12 11:57:46.585915] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:57.118 [2024-07-12 11:57:46.585921] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:57.118 [2024-07-12 11:57:46.585974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.585987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.585994] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.118 [2024-07-12 11:57:46.586012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:57.118 [2024-07-12 11:57:46.586038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.118 [2024-07-12 11:57:46.592878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.118 [2024-07-12 11:57:46.592896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.118 [2024-07-12 11:57:46.592903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.592911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.118 [2024-07-12 11:57:46.592932] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:57.118 [2024-07-12 11:57:46.592943] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:57.118 [2024-07-12 11:57:46.592953] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:57.118 [2024-07-12 11:57:46.592974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.592983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.592989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.118 [2024-07-12 11:57:46.593005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.118 [2024-07-12 11:57:46.593029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.118 [2024-07-12 11:57:46.593130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.118 [2024-07-12 11:57:46.593144] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.118 [2024-07-12 11:57:46.593150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.593157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.118 [2024-07-12 11:57:46.593167] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:57.118 [2024-07-12 11:57:46.593180] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:57.118 [2024-07-12 11:57:46.593192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.593199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.593206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.118 [2024-07-12 11:57:46.593216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.118 [2024-07-12 11:57:46.593237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.118 [2024-07-12 11:57:46.593312] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.118 [2024-07-12 11:57:46.593323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.118 [2024-07-12 11:57:46.593330] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.593336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.118 [2024-07-12 11:57:46.593346] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:57.118 [2024-07-12 11:57:46.593360] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:57.118 [2024-07-12 11:57:46.593372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.118 [2024-07-12 11:57:46.593379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593385] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.119 [2024-07-12 11:57:46.593396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.119 [2024-07-12 11:57:46.593416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.119 [2024-07-12 11:57:46.593500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.119 [2024-07-12 
11:57:46.593513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.119 [2024-07-12 11:57:46.593520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.119 [2024-07-12 11:57:46.593536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:57.119 [2024-07-12 11:57:46.593553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593568] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.119 [2024-07-12 11:57:46.593578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.119 [2024-07-12 11:57:46.593598] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.119 [2024-07-12 11:57:46.593672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.119 [2024-07-12 11:57:46.593684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.119 [2024-07-12 11:57:46.593691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.119 [2024-07-12 11:57:46.593707] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:57.119 [2024-07-12 11:57:46.593716] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:57.119 [2024-07-12 11:57:46.593728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:57.119 [2024-07-12 11:57:46.593838] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:57.119 [2024-07-12 11:57:46.593846] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:57.119 [2024-07-12 11:57:46.593859] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593877] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.593884] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.119 [2024-07-12 11:57:46.593895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.119 [2024-07-12 11:57:46.593916] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.119 [2024-07-12 11:57:46.594013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.119 [2024-07-12 11:57:46.594027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.119 [2024-07-12 11:57:46.594034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594040] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.119 [2024-07-12 11:57:46.594050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:57.119 [2024-07-12 11:57:46.594066] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.119 [2024-07-12 11:57:46.594092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.119 [2024-07-12 11:57:46.594111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.119 [2024-07-12 11:57:46.594186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.119 [2024-07-12 11:57:46.594199] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.119 [2024-07-12 11:57:46.594206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594212] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.119 [2024-07-12 11:57:46.594222] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:57.119 [2024-07-12 11:57:46.594230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:57.119 [2024-07-12 11:57:46.594243] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:57.119 [2024-07-12 11:57:46.594261] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:57.119 [2024-07-12 11:57:46.594280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594289] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.119 [2024-07-12 11:57:46.594300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.119 [2024-07-12 11:57:46.594320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.119 [2024-07-12 11:57:46.594438] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.119 [2024-07-12 11:57:46.594452] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.119 [2024-07-12 11:57:46.594459] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594466] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9b540): datao=0, datal=4096, cccid=0 00:20:57.119 [2024-07-12 11:57:46.594473] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efb3a0) on tqpair(0x1e9b540): expected_datao=0, payload_size=4096 00:20:57.119 [2024-07-12 11:57:46.594481] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594498] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.119 [2024-07-12 11:57:46.594508] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.634943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.381 [2024-07-12 11:57:46.634964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.381 [2024-07-12 11:57:46.634972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.634980] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.381 [2024-07-12 11:57:46.634993] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:57.381 [2024-07-12 11:57:46.635003] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:57.381 [2024-07-12 11:57:46.635011] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:57.381 [2024-07-12 11:57:46.635024] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:57.381 [2024-07-12 11:57:46.635034] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:57.381 [2024-07-12 11:57:46.635042] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:57.381 [2024-07-12 11:57:46.635057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:57.381 [2024-07-12 11:57:46.635069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:57.381 [2024-07-12 11:57:46.635118] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.381 [2024-07-12 11:57:46.635203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.381 [2024-07-12 11:57:46.635217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.381 [2024-07-12 11:57:46.635224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635231] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb3a0) on tqpair=0x1e9b540 00:20:57.381 [2024-07-12 11:57:46.635244] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635256] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:57.381 [2024-07-12 11:57:46.635283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635290] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635296] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.381 [2024-07-12 11:57:46.635314] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.381 [2024-07-12 11:57:46.635345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635352] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635358] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.381 [2024-07-12 11:57:46.635375] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:57.381 [2024-07-12 11:57:46.635395] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:57.381 [2024-07-12 11:57:46.635408] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635415] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.381 [2024-07-12 11:57:46.635447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb3a0, cid 0, qid 0 00:20:57.381 [2024-07-12 11:57:46.635458] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb500, cid 1, qid 0 00:20:57.381 [2024-07-12 11:57:46.635465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb660, cid 2, qid 0 00:20:57.381 [2024-07-12 11:57:46.635473] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.381 [2024-07-12 11:57:46.635480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb920, cid 4, qid 0 00:20:57.381 [2024-07-12 11:57:46.635593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.381 [2024-07-12 11:57:46.635605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.381 [2024-07-12 11:57:46.635612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb920) on tqpair=0x1e9b540 
00:20:57.381 [2024-07-12 11:57:46.635629] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:57.381 [2024-07-12 11:57:46.635638] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:57.381 [2024-07-12 11:57:46.635655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.381 [2024-07-12 11:57:46.635665] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9b540) 00:20:57.381 [2024-07-12 11:57:46.635679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.381 [2024-07-12 11:57:46.635700] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb920, cid 4, qid 0 00:20:57.381 [2024-07-12 11:57:46.635791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.382 [2024-07-12 11:57:46.635803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.382 [2024-07-12 11:57:46.635810] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635816] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9b540): datao=0, datal=4096, cccid=4 00:20:57.382 [2024-07-12 11:57:46.635824] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efb920) on tqpair(0x1e9b540): expected_datao=0, payload_size=4096 00:20:57.382 [2024-07-12 11:57:46.635831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635841] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635849] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635860] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.382 [2024-07-12 11:57:46.635877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.382 [2024-07-12 11:57:46.635885] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635892] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb920) on tqpair=0x1e9b540 00:20:57.382 [2024-07-12 11:57:46.635912] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:57.382 [2024-07-12 11:57:46.635947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9b540) 00:20:57.382 [2024-07-12 11:57:46.635969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.382 [2024-07-12 11:57:46.635980] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.635993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e9b540) 00:20:57.382 [2024-07-12 11:57:46.636002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.382 [2024-07-12 11:57:46.636028] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb920, cid 4, qid 0 00:20:57.382 [2024-07-12 11:57:46.636040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efba80, cid 5, qid 0 00:20:57.382 [2024-07-12 11:57:46.636162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.382 [2024-07-12 11:57:46.636174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.382 [2024-07-12 11:57:46.636181] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.636187] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9b540): datao=0, datal=1024, cccid=4 00:20:57.382 [2024-07-12 11:57:46.636195] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efb920) on tqpair(0x1e9b540): expected_datao=0, payload_size=1024 00:20:57.382 [2024-07-12 11:57:46.636202] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.636211] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.636219] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.636227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.382 [2024-07-12 11:57:46.636236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.382 [2024-07-12 11:57:46.636243] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.636249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efba80) on tqpair=0x1e9b540 00:20:57.382 [2024-07-12 11:57:46.676949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.382 [2024-07-12 11:57:46.676968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.382 [2024-07-12 11:57:46.676976] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.676983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb920) on tqpair=0x1e9b540 00:20:57.382 [2024-07-12 11:57:46.677009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.677020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9b540) 00:20:57.382 [2024-07-12 11:57:46.677032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.382 [2024-07-12 11:57:46.677062] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb920, cid 4, qid 0 00:20:57.382 [2024-07-12 11:57:46.677166] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.382 [2024-07-12 11:57:46.677178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.382 [2024-07-12 11:57:46.677185] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.677192] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9b540): datao=0, datal=3072, cccid=4 00:20:57.382 [2024-07-12 11:57:46.677199] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efb920) on tqpair(0x1e9b540): expected_datao=0, payload_size=3072 00:20:57.382 [2024-07-12 11:57:46.677207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.677227] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:20:57.382 [2024-07-12 11:57:46.677236] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.717937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.382 [2024-07-12 11:57:46.717955] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.382 [2024-07-12 11:57:46.717963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.717970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb920) on tqpair=0x1e9b540 00:20:57.382 [2024-07-12 11:57:46.717987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.717996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9b540) 00:20:57.382 [2024-07-12 11:57:46.718007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.382 [2024-07-12 11:57:46.718036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb920, cid 4, qid 0 00:20:57.382 [2024-07-12 11:57:46.718135] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.382 [2024-07-12 11:57:46.718149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.382 [2024-07-12 11:57:46.718155] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.718162] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9b540): datao=0, datal=8, cccid=4 00:20:57.382 [2024-07-12 11:57:46.718170] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efb920) on tqpair(0x1e9b540): expected_datao=0, payload_size=8 00:20:57.382 [2024-07-12 11:57:46.718177] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.718187] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.718194] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.761884] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.382 [2024-07-12 11:57:46.761903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.382 [2024-07-12 11:57:46.761912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.382 [2024-07-12 11:57:46.761920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb920) on tqpair=0x1e9b540 00:20:57.382 ===================================================== 00:20:57.382 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:57.382 ===================================================== 00:20:57.382 Controller Capabilities/Features 00:20:57.382 ================================ 00:20:57.382 Vendor ID: 0000 00:20:57.382 Subsystem Vendor ID: 0000 00:20:57.382 Serial Number: .................... 00:20:57.382 Model Number: ........................................ 
00:20:57.382 Firmware Version: 24.09 00:20:57.382 Recommended Arb Burst: 0 00:20:57.382 IEEE OUI Identifier: 00 00 00 00:20:57.382 Multi-path I/O 00:20:57.382 May have multiple subsystem ports: No 00:20:57.382 May have multiple controllers: No 00:20:57.382 Associated with SR-IOV VF: No 00:20:57.382 Max Data Transfer Size: 131072 00:20:57.382 Max Number of Namespaces: 0 00:20:57.382 Max Number of I/O Queues: 1024 00:20:57.382 NVMe Specification Version (VS): 1.3 00:20:57.382 NVMe Specification Version (Identify): 1.3 00:20:57.382 Maximum Queue Entries: 128 00:20:57.382 Contiguous Queues Required: Yes 00:20:57.382 Arbitration Mechanisms Supported 00:20:57.382 Weighted Round Robin: Not Supported 00:20:57.382 Vendor Specific: Not Supported 00:20:57.382 Reset Timeout: 15000 ms 00:20:57.382 Doorbell Stride: 4 bytes 00:20:57.382 NVM Subsystem Reset: Not Supported 00:20:57.382 Command Sets Supported 00:20:57.382 NVM Command Set: Supported 00:20:57.382 Boot Partition: Not Supported 00:20:57.382 Memory Page Size Minimum: 4096 bytes 00:20:57.382 Memory Page Size Maximum: 4096 bytes 00:20:57.382 Persistent Memory Region: Not Supported 00:20:57.382 Optional Asynchronous Events Supported 00:20:57.382 Namespace Attribute Notices: Not Supported 00:20:57.382 Firmware Activation Notices: Not Supported 00:20:57.382 ANA Change Notices: Not Supported 00:20:57.382 PLE Aggregate Log Change Notices: Not Supported 00:20:57.382 LBA Status Info Alert Notices: Not Supported 00:20:57.382 EGE Aggregate Log Change Notices: Not Supported 00:20:57.383 Normal NVM Subsystem Shutdown event: Not Supported 00:20:57.383 Zone Descriptor Change Notices: Not Supported 00:20:57.383 Discovery Log Change Notices: Supported 00:20:57.383 Controller Attributes 00:20:57.383 128-bit Host Identifier: Not Supported 00:20:57.383 Non-Operational Permissive Mode: Not Supported 00:20:57.383 NVM Sets: Not Supported 00:20:57.383 Read Recovery Levels: Not Supported 00:20:57.383 Endurance Groups: Not Supported 00:20:57.383 Predictable Latency Mode: Not Supported 00:20:57.383 Traffic Based Keep ALive: Not Supported 00:20:57.383 Namespace Granularity: Not Supported 00:20:57.383 SQ Associations: Not Supported 00:20:57.383 UUID List: Not Supported 00:20:57.383 Multi-Domain Subsystem: Not Supported 00:20:57.383 Fixed Capacity Management: Not Supported 00:20:57.383 Variable Capacity Management: Not Supported 00:20:57.383 Delete Endurance Group: Not Supported 00:20:57.383 Delete NVM Set: Not Supported 00:20:57.383 Extended LBA Formats Supported: Not Supported 00:20:57.383 Flexible Data Placement Supported: Not Supported 00:20:57.383 00:20:57.383 Controller Memory Buffer Support 00:20:57.383 ================================ 00:20:57.383 Supported: No 00:20:57.383 00:20:57.383 Persistent Memory Region Support 00:20:57.383 ================================ 00:20:57.383 Supported: No 00:20:57.383 00:20:57.383 Admin Command Set Attributes 00:20:57.383 ============================ 00:20:57.383 Security Send/Receive: Not Supported 00:20:57.383 Format NVM: Not Supported 00:20:57.383 Firmware Activate/Download: Not Supported 00:20:57.383 Namespace Management: Not Supported 00:20:57.383 Device Self-Test: Not Supported 00:20:57.383 Directives: Not Supported 00:20:57.383 NVMe-MI: Not Supported 00:20:57.383 Virtualization Management: Not Supported 00:20:57.383 Doorbell Buffer Config: Not Supported 00:20:57.383 Get LBA Status Capability: Not Supported 00:20:57.383 Command & Feature Lockdown Capability: Not Supported 00:20:57.383 Abort Command Limit: 1 00:20:57.383 Async 
Event Request Limit: 4 00:20:57.383 Number of Firmware Slots: N/A 00:20:57.383 Firmware Slot 1 Read-Only: N/A 00:20:57.383 Firmware Activation Without Reset: N/A 00:20:57.383 Multiple Update Detection Support: N/A 00:20:57.383 Firmware Update Granularity: No Information Provided 00:20:57.383 Per-Namespace SMART Log: No 00:20:57.383 Asymmetric Namespace Access Log Page: Not Supported 00:20:57.383 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:57.383 Command Effects Log Page: Not Supported 00:20:57.383 Get Log Page Extended Data: Supported 00:20:57.383 Telemetry Log Pages: Not Supported 00:20:57.383 Persistent Event Log Pages: Not Supported 00:20:57.383 Supported Log Pages Log Page: May Support 00:20:57.383 Commands Supported & Effects Log Page: Not Supported 00:20:57.383 Feature Identifiers & Effects Log Page:May Support 00:20:57.383 NVMe-MI Commands & Effects Log Page: May Support 00:20:57.383 Data Area 4 for Telemetry Log: Not Supported 00:20:57.383 Error Log Page Entries Supported: 128 00:20:57.383 Keep Alive: Not Supported 00:20:57.383 00:20:57.383 NVM Command Set Attributes 00:20:57.383 ========================== 00:20:57.383 Submission Queue Entry Size 00:20:57.383 Max: 1 00:20:57.383 Min: 1 00:20:57.383 Completion Queue Entry Size 00:20:57.383 Max: 1 00:20:57.383 Min: 1 00:20:57.383 Number of Namespaces: 0 00:20:57.383 Compare Command: Not Supported 00:20:57.383 Write Uncorrectable Command: Not Supported 00:20:57.383 Dataset Management Command: Not Supported 00:20:57.383 Write Zeroes Command: Not Supported 00:20:57.383 Set Features Save Field: Not Supported 00:20:57.383 Reservations: Not Supported 00:20:57.383 Timestamp: Not Supported 00:20:57.383 Copy: Not Supported 00:20:57.383 Volatile Write Cache: Not Present 00:20:57.383 Atomic Write Unit (Normal): 1 00:20:57.383 Atomic Write Unit (PFail): 1 00:20:57.383 Atomic Compare & Write Unit: 1 00:20:57.383 Fused Compare & Write: Supported 00:20:57.383 Scatter-Gather List 00:20:57.383 SGL Command Set: Supported 00:20:57.383 SGL Keyed: Supported 00:20:57.383 SGL Bit Bucket Descriptor: Not Supported 00:20:57.383 SGL Metadata Pointer: Not Supported 00:20:57.383 Oversized SGL: Not Supported 00:20:57.383 SGL Metadata Address: Not Supported 00:20:57.383 SGL Offset: Supported 00:20:57.383 Transport SGL Data Block: Not Supported 00:20:57.383 Replay Protected Memory Block: Not Supported 00:20:57.383 00:20:57.383 Firmware Slot Information 00:20:57.383 ========================= 00:20:57.383 Active slot: 0 00:20:57.383 00:20:57.383 00:20:57.383 Error Log 00:20:57.383 ========= 00:20:57.383 00:20:57.383 Active Namespaces 00:20:57.383 ================= 00:20:57.383 Discovery Log Page 00:20:57.383 ================== 00:20:57.383 Generation Counter: 2 00:20:57.383 Number of Records: 2 00:20:57.383 Record Format: 0 00:20:57.383 00:20:57.383 Discovery Log Entry 0 00:20:57.383 ---------------------- 00:20:57.383 Transport Type: 3 (TCP) 00:20:57.383 Address Family: 1 (IPv4) 00:20:57.383 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:57.383 Entry Flags: 00:20:57.383 Duplicate Returned Information: 1 00:20:57.383 Explicit Persistent Connection Support for Discovery: 1 00:20:57.383 Transport Requirements: 00:20:57.383 Secure Channel: Not Required 00:20:57.383 Port ID: 0 (0x0000) 00:20:57.383 Controller ID: 65535 (0xffff) 00:20:57.383 Admin Max SQ Size: 128 00:20:57.383 Transport Service Identifier: 4420 00:20:57.383 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:57.383 Transport Address: 10.0.0.2 00:20:57.383 
Discovery Log Entry 1 00:20:57.383 ---------------------- 00:20:57.383 Transport Type: 3 (TCP) 00:20:57.383 Address Family: 1 (IPv4) 00:20:57.383 Subsystem Type: 2 (NVM Subsystem) 00:20:57.383 Entry Flags: 00:20:57.383 Duplicate Returned Information: 0 00:20:57.383 Explicit Persistent Connection Support for Discovery: 0 00:20:57.383 Transport Requirements: 00:20:57.383 Secure Channel: Not Required 00:20:57.383 Port ID: 0 (0x0000) 00:20:57.383 Controller ID: 65535 (0xffff) 00:20:57.383 Admin Max SQ Size: 128 00:20:57.383 Transport Service Identifier: 4420 00:20:57.383 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:57.383 Transport Address: 10.0.0.2 [2024-07-12 11:57:46.762036] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:57.383 [2024-07-12 11:57:46.762062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.383 [2024-07-12 11:57:46.762075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.383 [2024-07-12 11:57:46.762086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.383 [2024-07-12 11:57:46.762097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.383 [2024-07-12 11:57:46.762112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.383 [2024-07-12 11:57:46.762122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.383 [2024-07-12 11:57:46.762130] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.383 [2024-07-12 11:57:46.762142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.383 [2024-07-12 11:57:46.762167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.383 [2024-07-12 11:57:46.762243] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.383 [2024-07-12 11:57:46.762255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.383 [2024-07-12 11:57:46.762262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.762286] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762296] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.762315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.762342] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.762441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.762456] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.762465] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.762482] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:57.384 [2024-07-12 11:57:46.762491] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:57.384 [2024-07-12 11:57:46.762507] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762516] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.762535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.762556] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.762634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.762648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.762655] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.762685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762702] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.762713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.762734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.762811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.762824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.762831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.762855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762874] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.762882] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.762893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.762914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.762990] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 
11:57:46.763003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763010] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763033] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763042] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763048] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.763169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763246] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.763333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763340] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763364] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763377] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763417] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.763504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763511] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:57.384 [2024-07-12 11:57:46.763518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763544] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763580] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.763671] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763702] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763750] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.384 [2024-07-12 11:57:46.763831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.384 [2024-07-12 11:57:46.763838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.384 [2024-07-12 11:57:46.763861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763879] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.384 [2024-07-12 11:57:46.763886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.384 [2024-07-12 11:57:46.763896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.384 [2024-07-12 11:57:46.763917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.384 [2024-07-12 11:57:46.763995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764007] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764021] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764038] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764047] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.764162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764213] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764220] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.764327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764341] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764373] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.764496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764516] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764542] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 
11:57:46.764558] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.764664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764684] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764691] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764708] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764724] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.764839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.764853] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.764860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764874] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.764893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.764908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.764919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.764939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.765016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.765029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.765036] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765043] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.765059] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765075] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.765085] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.765105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.765177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.765189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.765195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765202] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.765219] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.765245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.765265] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.765339] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.765351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.765358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.765381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.765407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.765431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.385 [2024-07-12 11:57:46.765513] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.385 [2024-07-12 11:57:46.765527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.385 [2024-07-12 11:57:46.765534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.385 [2024-07-12 11:57:46.765557] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765566] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.385 [2024-07-12 11:57:46.765573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.385 [2024-07-12 11:57:46.765583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.385 [2024-07-12 11:57:46.765603] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.386 [2024-07-12 11:57:46.765683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.386 [2024-07-12 11:57:46.765697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.386 [2024-07-12 11:57:46.765703] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.765710] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.386 [2024-07-12 11:57:46.765726] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.765735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.765741] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.386 [2024-07-12 11:57:46.765752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.386 [2024-07-12 11:57:46.765771] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.386 [2024-07-12 11:57:46.765849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.386 [2024-07-12 11:57:46.765861] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.386 [2024-07-12 11:57:46.769878] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.769889] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.386 [2024-07-12 11:57:46.769908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.769918] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.769924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9b540) 00:20:57.386 [2024-07-12 11:57:46.769935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.386 [2024-07-12 11:57:46.769956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efb7c0, cid 3, qid 0 00:20:57.386 [2024-07-12 11:57:46.770039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.386 [2024-07-12 11:57:46.770053] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.386 [2024-07-12 11:57:46.770059] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.770066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efb7c0) on tqpair=0x1e9b540 00:20:57.386 [2024-07-12 11:57:46.770080] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:57.386 00:20:57.386 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:57.386 [2024-07-12 11:57:46.805396] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
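Editor's note: the repeated PROPERTY GET records above are the discovery controller's shutdown handshake — after CC.SHN is set (RTD3E = 0, 10000 ms budget), the host keeps polling CSTS over the admin queue until SHST reports shutdown complete, which here happens within 7 ms. The sketch below is only a minimal, self-contained illustration of that poll loop under assumed names; prop_get_csts(), fake_ctrlr and the loop are hypothetical stand-ins, not SPDK's nvme_ctrlr.c, though the CSTS.SHST bit layout matches the NVMe base specification.

/*
 * Minimal sketch of the CSTS.SHST shutdown poll reflected in the log above.
 * NOT SPDK code: prop_get_csts() and struct fake_ctrlr are hypothetical
 * stand-ins for a Fabrics PROPERTY GET round trip over the admin queue.
 */
#include <stdint.h>
#include <stdio.h>

#define CSTS_SHST_MASK      0x0cu   /* CSTS bits 3:2 (Shutdown Status)        */
#define CSTS_SHST_COMPLETE  0x08u   /* 10b = shutdown processing complete      */

struct fake_ctrlr {
    uint32_t csts;
    int      polls_until_done;      /* simulates the target finishing shutdown */
};

/* Stand-in for one FABRIC PROPERTY GET (CSTS) request/response exchange. */
static uint32_t prop_get_csts(struct fake_ctrlr *c)
{
    if (c->polls_until_done > 0 && --c->polls_until_done == 0) {
        c->csts |= CSTS_SHST_COMPLETE;
    }
    return c->csts;
}

/* Poll CSTS.SHST until shutdown completes or the timeout budget is spent. */
static int wait_for_shutdown(struct fake_ctrlr *c, int timeout_ms)
{
    for (int elapsed = 0; elapsed <= timeout_ms; elapsed++) {
        uint32_t csts = prop_get_csts(c);
        if ((csts & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE) {
            printf("shutdown complete in %d ms\n", elapsed);
            return 0;
        }
    }
    return -1; /* shutdown did not complete within the budget */
}

int main(void)
{
    struct fake_ctrlr c = { .csts = 0, .polls_until_done = 8 };
    return wait_for_shutdown(&c, 10000);
}

The identify test then launches spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1 with -L all, so the records that follow are the same driver paths traced at debug level during a fresh controller bring-up.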
00:20:57.386 [2024-07-12 11:57:46.805439] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979852 ] 00:20:57.386 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.386 [2024-07-12 11:57:46.841373] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:57.386 [2024-07-12 11:57:46.841428] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:57.386 [2024-07-12 11:57:46.841438] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:57.386 [2024-07-12 11:57:46.841457] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:57.386 [2024-07-12 11:57:46.841470] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:57.386 [2024-07-12 11:57:46.841666] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:57.386 [2024-07-12 11:57:46.841705] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf48540 0 00:20:57.386 [2024-07-12 11:57:46.847878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:57.386 [2024-07-12 11:57:46.847897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:57.386 [2024-07-12 11:57:46.847906] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:57.386 [2024-07-12 11:57:46.847913] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:57.386 [2024-07-12 11:57:46.847954] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.847966] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.847973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.386 [2024-07-12 11:57:46.847987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:57.386 [2024-07-12 11:57:46.848014] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.386 [2024-07-12 11:57:46.855880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.386 [2024-07-12 11:57:46.855897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.386 [2024-07-12 11:57:46.855905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.855912] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.386 [2024-07-12 11:57:46.855930] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:57.386 [2024-07-12 11:57:46.855942] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:57.386 [2024-07-12 11:57:46.855952] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:57.386 [2024-07-12 11:57:46.855969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.855978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.386 [2024-07-12 
11:57:46.855985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.386 [2024-07-12 11:57:46.855997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.386 [2024-07-12 11:57:46.856020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.386 [2024-07-12 11:57:46.856138] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.386 [2024-07-12 11:57:46.856153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.386 [2024-07-12 11:57:46.856165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.386 [2024-07-12 11:57:46.856172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.386 [2024-07-12 11:57:46.856181] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:57.387 [2024-07-12 11:57:46.856195] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:57.387 [2024-07-12 11:57:46.856208] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.856233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.856254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.856341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.387 [2024-07-12 11:57:46.856354] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.387 [2024-07-12 11:57:46.856361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856368] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.387 [2024-07-12 11:57:46.856377] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:57.387 [2024-07-12 11:57:46.856391] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.856404] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856418] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.856429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.856450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.856530] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.387 [2024-07-12 11:57:46.856544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.387 
[2024-07-12 11:57:46.856551] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.387 [2024-07-12 11:57:46.856567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.856584] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856600] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.856611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.856631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.856711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.387 [2024-07-12 11:57:46.856723] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.387 [2024-07-12 11:57:46.856730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.387 [2024-07-12 11:57:46.856745] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:57.387 [2024-07-12 11:57:46.856759] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.856773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.856883] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:57.387 [2024-07-12 11:57:46.856892] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.856906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.856920] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.856931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.856953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.857071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.387 [2024-07-12 11:57:46.857083] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.387 [2024-07-12 11:57:46.857090] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.387 
[2024-07-12 11:57:46.857105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:57.387 [2024-07-12 11:57:46.857122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857138] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.857149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.857169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.857256] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.387 [2024-07-12 11:57:46.857268] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.387 [2024-07-12 11:57:46.857275] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857282] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.387 [2024-07-12 11:57:46.857290] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:57.387 [2024-07-12 11:57:46.857299] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:57.387 [2024-07-12 11:57:46.857313] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:57.387 [2024-07-12 11:57:46.857330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:57.387 [2024-07-12 11:57:46.857345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857354] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.387 [2024-07-12 11:57:46.857365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.387 [2024-07-12 11:57:46.857386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.387 [2024-07-12 11:57:46.857524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.387 [2024-07-12 11:57:46.857537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.387 [2024-07-12 11:57:46.857544] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857551] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=4096, cccid=0 00:20:57.387 [2024-07-12 11:57:46.857559] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa83a0) on tqpair(0xf48540): expected_datao=0, payload_size=4096 00:20:57.387 [2024-07-12 11:57:46.857566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857583] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.387 [2024-07-12 11:57:46.857593] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
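Editor's note: the "setting state to ..." records between the FABRIC CONNECT and this point walk the admin-queue initialization state machine that spdk_nvme_identify drives before it can read controller data: connect adminq, icreq/icresp, FABRIC CONNECT, read VS, read CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY (CNS 01h, 4096-byte controller data structure, seen above as payload_size=4096, cccid=0). The sketch below only mirrors that ordering under assumed names; the enum and the driver loop are illustrative, not SPDK's internal state machine.

/*
 * Illustrative-only sketch of the admin-queue bring-up order visible in the
 * "setting state to ..." records above. The enum and the loop are hypothetical;
 * they mirror the sequence of states in the log, not SPDK internals.
 */
#include <stdio.h>

enum init_state {
    CONNECT_ADMINQ,        /* TCP connect + ICReq/ICResp + FABRIC CONNECT      */
    READ_VS,               /* PROPERTY GET of the Version register             */
    READ_CAP,              /* PROPERTY GET of Controller Capabilities          */
    CHECK_EN,              /* read CC to see whether the controller is enabled */
    WAIT_CSTS_RDY_0,       /* disable (CC.EN = 0) and wait for CSTS.RDY = 0    */
    ENABLE,                /* PROPERTY SET CC.EN = 1                           */
    WAIT_CSTS_RDY_1,       /* wait for CSTS.RDY = 1                            */
    IDENTIFY_CTRLR,        /* IDENTIFY CNS 01h, 4096-byte controller data      */
    DONE
};

static const char *state_name[] = {
    "connect adminq", "read vs", "read cap", "check en",
    "disable and wait for CSTS.RDY = 0", "enable controller",
    "wait for CSTS.RDY = 1", "identify controller", "ready"
};

int main(void)
{
    /* Walk the states in order, one admin command (or register poll) per step. */
    for (enum init_state s = CONNECT_ADMINQ; s <= DONE; s++) {
        printf("setting state to %s\n", state_name[s]);
    }
    return 0;
}

The remaining records below continue past this point in the same fashion: configure AER, set keep-alive timeout, set number of queues, identify active namespaces, identify namespace 1 and its ID descriptors, fetch the supported log pages and features, and finally reach the ready state before the human-readable controller report is printed.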
00:20:57.649 [2024-07-12 11:57:46.899877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.649 [2024-07-12 11:57:46.899898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.649 [2024-07-12 11:57:46.899907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.899914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.649 [2024-07-12 11:57:46.899926] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:57.649 [2024-07-12 11:57:46.899936] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:57.649 [2024-07-12 11:57:46.899944] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:57.649 [2024-07-12 11:57:46.899956] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:57.649 [2024-07-12 11:57:46.899964] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:57.649 [2024-07-12 11:57:46.899973] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.899988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.900016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:57.649 [2024-07-12 11:57:46.900069] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.649 [2024-07-12 11:57:46.900187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.649 [2024-07-12 11:57:46.900200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.649 [2024-07-12 11:57:46.900208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa83a0) on tqpair=0xf48540 00:20:57.649 [2024-07-12 11:57:46.900227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.649 [2024-07-12 11:57:46.900263] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.649 [2024-07-12 11:57:46.900302] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900309] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.649 [2024-07-12 11:57:46.900334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900348] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.649 [2024-07-12 11:57:46.900366] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.900400] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.900413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.649 [2024-07-12 11:57:46.900454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa83a0, cid 0, qid 0 00:20:57.649 [2024-07-12 11:57:46.900480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8500, cid 1, qid 0 00:20:57.649 [2024-07-12 11:57:46.900489] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8660, cid 2, qid 0 00:20:57.649 [2024-07-12 11:57:46.900497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.649 [2024-07-12 11:57:46.900505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.649 [2024-07-12 11:57:46.900655] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.649 [2024-07-12 11:57:46.900670] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.649 [2024-07-12 11:57:46.900677] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.649 [2024-07-12 11:57:46.900693] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:57.649 [2024-07-12 11:57:46.900702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:57.649 
[2024-07-12 11:57:46.900717] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.900728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.900740] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.900755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.649 [2024-07-12 11:57:46.900766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:57.649 [2024-07-12 11:57:46.900802] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.649 [2024-07-12 11:57:46.900981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.649 [2024-07-12 11:57:46.900996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.649 [2024-07-12 11:57:46.901004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.649 [2024-07-12 11:57:46.901011] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.649 [2024-07-12 11:57:46.901068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:57.649 [2024-07-12 11:57:46.901089] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.901104] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901112] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.650 [2024-07-12 11:57:46.901124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.650 [2024-07-12 11:57:46.901146] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.650 [2024-07-12 11:57:46.901275] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.650 [2024-07-12 11:57:46.901291] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.650 [2024-07-12 11:57:46.901298] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901305] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=4096, cccid=4 00:20:57.650 [2024-07-12 11:57:46.901313] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8920) on tqpair(0xf48540): expected_datao=0, payload_size=4096 00:20:57.650 [2024-07-12 11:57:46.901320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901331] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901339] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901351] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.650 [2024-07-12 11:57:46.901361] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.650 [2024-07-12 11:57:46.901368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901375] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.650 [2024-07-12 11:57:46.901390] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:57.650 [2024-07-12 11:57:46.901408] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.901426] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.901440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.650 [2024-07-12 11:57:46.901459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.650 [2024-07-12 11:57:46.901481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.650 [2024-07-12 11:57:46.901589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.650 [2024-07-12 11:57:46.901602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.650 [2024-07-12 11:57:46.901609] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901616] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=4096, cccid=4 00:20:57.650 [2024-07-12 11:57:46.901623] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8920) on tqpair(0xf48540): expected_datao=0, payload_size=4096 00:20:57.650 [2024-07-12 11:57:46.901637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901649] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901657] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.650 [2024-07-12 11:57:46.901678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.650 [2024-07-12 11:57:46.901685] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.650 [2024-07-12 11:57:46.901711] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.901730] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.901744] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.650 [2024-07-12 11:57:46.901764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.650 [2024-07-12 11:57:46.901785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.650 [2024-07-12 11:57:46.901880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.650 [2024-07-12 11:57:46.901894] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.650 [2024-07-12 11:57:46.901902] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901908] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=4096, cccid=4 00:20:57.650 [2024-07-12 11:57:46.901916] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8920) on tqpair(0xf48540): expected_datao=0, payload_size=4096 00:20:57.650 [2024-07-12 11:57:46.901924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901934] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901942] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.650 [2024-07-12 11:57:46.901964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.650 [2024-07-12 11:57:46.901971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.901977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.650 [2024-07-12 11:57:46.901990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902006] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902033] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902042] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902051] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:57.650 [2024-07-12 11:57:46.902059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:57.650 [2024-07-12 11:57:46.902068] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:57.650 [2024-07-12 11:57:46.902093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.902103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.650 [2024-07-12 11:57:46.902115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.650 [2024-07-12 11:57:46.902126] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.902134] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.902140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf48540) 00:20:57.650 [2024-07-12 11:57:46.902150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.650 [2024-07-12 11:57:46.902189] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.650 [2024-07-12 11:57:46.902202] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8a80, cid 5, qid 0 00:20:57.650 [2024-07-12 11:57:46.902364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.650 [2024-07-12 11:57:46.902377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.650 [2024-07-12 11:57:46.902384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.902391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.650 [2024-07-12 11:57:46.902401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.650 [2024-07-12 11:57:46.902411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.650 [2024-07-12 11:57:46.902418] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.650 [2024-07-12 11:57:46.902425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8a80) on tqpair=0xf48540 00:20:57.650 [2024-07-12 11:57:46.902440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902449] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.902460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.902481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8a80, cid 5, qid 0 00:20:57.651 [2024-07-12 11:57:46.902569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.902581] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.651 [2024-07-12 11:57:46.902588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8a80) on tqpair=0xf48540 00:20:57.651 [2024-07-12 11:57:46.902611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.902630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.902651] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8a80, cid 5, qid 0 00:20:57.651 [2024-07-12 11:57:46.902735] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.902749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.651 [2024-07-12 11:57:46.902756] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8a80) on tqpair=0xf48540 00:20:57.651 [2024-07-12 11:57:46.902779] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.902803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.902825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8a80, cid 5, qid 0 00:20:57.651 [2024-07-12 11:57:46.902913] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.902929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.651 [2024-07-12 11:57:46.902936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8a80) on tqpair=0xf48540 00:20:57.651 [2024-07-12 11:57:46.902964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.902974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.902985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.902998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.903016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.903028] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903036] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.903045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.903058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903065] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf48540) 00:20:57.651 [2024-07-12 11:57:46.903075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.651 [2024-07-12 11:57:46.903097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8a80, cid 5, qid 0 00:20:57.651 [2024-07-12 11:57:46.903109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8920, cid 4, qid 0 00:20:57.651 [2024-07-12 11:57:46.903117] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8be0, cid 6, qid 0 00:20:57.651 [2024-07-12 11:57:46.903125] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8d40, cid 7, qid 0 00:20:57.651 [2024-07-12 11:57:46.903329] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.651 [2024-07-12 11:57:46.903344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.651 [2024-07-12 11:57:46.903352] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903358] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=8192, cccid=5 00:20:57.651 [2024-07-12 11:57:46.903366] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8a80) on tqpair(0xf48540): expected_datao=0, payload_size=8192 00:20:57.651 [2024-07-12 11:57:46.903374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903385] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903393] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.651 [2024-07-12 11:57:46.903411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.651 [2024-07-12 11:57:46.903418] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903424] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=512, cccid=4 00:20:57.651 [2024-07-12 11:57:46.903436] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8920) on tqpair(0xf48540): expected_datao=0, payload_size=512 00:20:57.651 [2024-07-12 11:57:46.903444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903454] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903461] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.651 [2024-07-12 11:57:46.903479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.651 [2024-07-12 11:57:46.903486] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903493] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=512, cccid=6 00:20:57.651 [2024-07-12 11:57:46.903500] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfa8be0) on tqpair(0xf48540): expected_datao=0, payload_size=512 00:20:57.651 [2024-07-12 11:57:46.903507] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903516] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903524] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903532] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:57.651 [2024-07-12 11:57:46.903541] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:57.651 [2024-07-12 11:57:46.903548] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903554] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf48540): datao=0, datal=4096, cccid=7 00:20:57.651 [2024-07-12 11:57:46.903562] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xfa8d40) on tqpair(0xf48540): expected_datao=0, payload_size=4096 00:20:57.651 [2024-07-12 11:57:46.903569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903579] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903587] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.903624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.651 [2024-07-12 11:57:46.903631] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8a80) on tqpair=0xf48540 00:20:57.651 [2024-07-12 11:57:46.903656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.903682] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.651 [2024-07-12 11:57:46.903689] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.651 [2024-07-12 11:57:46.903695] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8920) on tqpair=0xf48540 00:20:57.651 [2024-07-12 11:57:46.903709] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.651 [2024-07-12 11:57:46.903720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.652 [2024-07-12 11:57:46.903726] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.652 [2024-07-12 11:57:46.903732] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8be0) on tqpair=0xf48540 00:20:57.652 [2024-07-12 11:57:46.903745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.652 [2024-07-12 11:57:46.903756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.652 [2024-07-12 11:57:46.903762] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.652 [2024-07-12 11:57:46.903769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8d40) on tqpair=0xf48540 00:20:57.652 ===================================================== 00:20:57.652 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.652 ===================================================== 00:20:57.652 Controller Capabilities/Features 00:20:57.652 ================================ 00:20:57.652 Vendor ID: 8086 00:20:57.652 Subsystem Vendor ID: 8086 00:20:57.652 Serial Number: SPDK00000000000001 00:20:57.652 Model Number: SPDK bdev Controller 00:20:57.652 Firmware Version: 24.09 00:20:57.652 Recommended Arb Burst: 6 00:20:57.652 IEEE OUI Identifier: e4 d2 5c 00:20:57.652 Multi-path I/O 00:20:57.652 May have multiple subsystem ports: Yes 00:20:57.652 May have multiple controllers: Yes 00:20:57.652 Associated with SR-IOV VF: No 00:20:57.652 Max Data Transfer Size: 131072 00:20:57.652 Max Number of Namespaces: 32 00:20:57.652 Max Number of I/O Queues: 127 00:20:57.652 NVMe Specification Version (VS): 1.3 00:20:57.652 NVMe Specification Version (Identify): 1.3 00:20:57.652 Maximum Queue Entries: 128 00:20:57.652 Contiguous Queues Required: Yes 00:20:57.652 Arbitration Mechanisms Supported 00:20:57.652 Weighted Round Robin: Not Supported 00:20:57.652 Vendor Specific: Not Supported 00:20:57.652 Reset Timeout: 15000 ms 00:20:57.652 Doorbell Stride: 4 bytes 00:20:57.652 
NVM Subsystem Reset: Not Supported 00:20:57.652 Command Sets Supported 00:20:57.652 NVM Command Set: Supported 00:20:57.652 Boot Partition: Not Supported 00:20:57.652 Memory Page Size Minimum: 4096 bytes 00:20:57.652 Memory Page Size Maximum: 4096 bytes 00:20:57.652 Persistent Memory Region: Not Supported 00:20:57.652 Optional Asynchronous Events Supported 00:20:57.652 Namespace Attribute Notices: Supported 00:20:57.652 Firmware Activation Notices: Not Supported 00:20:57.652 ANA Change Notices: Not Supported 00:20:57.652 PLE Aggregate Log Change Notices: Not Supported 00:20:57.652 LBA Status Info Alert Notices: Not Supported 00:20:57.652 EGE Aggregate Log Change Notices: Not Supported 00:20:57.652 Normal NVM Subsystem Shutdown event: Not Supported 00:20:57.652 Zone Descriptor Change Notices: Not Supported 00:20:57.652 Discovery Log Change Notices: Not Supported 00:20:57.652 Controller Attributes 00:20:57.652 128-bit Host Identifier: Supported 00:20:57.652 Non-Operational Permissive Mode: Not Supported 00:20:57.652 NVM Sets: Not Supported 00:20:57.652 Read Recovery Levels: Not Supported 00:20:57.652 Endurance Groups: Not Supported 00:20:57.652 Predictable Latency Mode: Not Supported 00:20:57.652 Traffic Based Keep ALive: Not Supported 00:20:57.652 Namespace Granularity: Not Supported 00:20:57.652 SQ Associations: Not Supported 00:20:57.652 UUID List: Not Supported 00:20:57.652 Multi-Domain Subsystem: Not Supported 00:20:57.652 Fixed Capacity Management: Not Supported 00:20:57.652 Variable Capacity Management: Not Supported 00:20:57.652 Delete Endurance Group: Not Supported 00:20:57.652 Delete NVM Set: Not Supported 00:20:57.652 Extended LBA Formats Supported: Not Supported 00:20:57.652 Flexible Data Placement Supported: Not Supported 00:20:57.652 00:20:57.652 Controller Memory Buffer Support 00:20:57.652 ================================ 00:20:57.652 Supported: No 00:20:57.652 00:20:57.652 Persistent Memory Region Support 00:20:57.652 ================================ 00:20:57.652 Supported: No 00:20:57.652 00:20:57.652 Admin Command Set Attributes 00:20:57.652 ============================ 00:20:57.652 Security Send/Receive: Not Supported 00:20:57.652 Format NVM: Not Supported 00:20:57.652 Firmware Activate/Download: Not Supported 00:20:57.652 Namespace Management: Not Supported 00:20:57.652 Device Self-Test: Not Supported 00:20:57.652 Directives: Not Supported 00:20:57.652 NVMe-MI: Not Supported 00:20:57.652 Virtualization Management: Not Supported 00:20:57.652 Doorbell Buffer Config: Not Supported 00:20:57.652 Get LBA Status Capability: Not Supported 00:20:57.652 Command & Feature Lockdown Capability: Not Supported 00:20:57.652 Abort Command Limit: 4 00:20:57.652 Async Event Request Limit: 4 00:20:57.652 Number of Firmware Slots: N/A 00:20:57.652 Firmware Slot 1 Read-Only: N/A 00:20:57.652 Firmware Activation Without Reset: N/A 00:20:57.652 Multiple Update Detection Support: N/A 00:20:57.652 Firmware Update Granularity: No Information Provided 00:20:57.652 Per-Namespace SMART Log: No 00:20:57.652 Asymmetric Namespace Access Log Page: Not Supported 00:20:57.652 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:57.652 Command Effects Log Page: Supported 00:20:57.652 Get Log Page Extended Data: Supported 00:20:57.652 Telemetry Log Pages: Not Supported 00:20:57.652 Persistent Event Log Pages: Not Supported 00:20:57.652 Supported Log Pages Log Page: May Support 00:20:57.652 Commands Supported & Effects Log Page: Not Supported 00:20:57.652 Feature Identifiers & Effects Log Page:May Support 
00:20:57.652 NVMe-MI Commands & Effects Log Page: May Support 00:20:57.652 Data Area 4 for Telemetry Log: Not Supported 00:20:57.652 Error Log Page Entries Supported: 128 00:20:57.652 Keep Alive: Supported 00:20:57.652 Keep Alive Granularity: 10000 ms 00:20:57.652 00:20:57.652 NVM Command Set Attributes 00:20:57.652 ========================== 00:20:57.652 Submission Queue Entry Size 00:20:57.652 Max: 64 00:20:57.652 Min: 64 00:20:57.652 Completion Queue Entry Size 00:20:57.652 Max: 16 00:20:57.652 Min: 16 00:20:57.652 Number of Namespaces: 32 00:20:57.652 Compare Command: Supported 00:20:57.652 Write Uncorrectable Command: Not Supported 00:20:57.652 Dataset Management Command: Supported 00:20:57.652 Write Zeroes Command: Supported 00:20:57.652 Set Features Save Field: Not Supported 00:20:57.652 Reservations: Supported 00:20:57.652 Timestamp: Not Supported 00:20:57.652 Copy: Supported 00:20:57.652 Volatile Write Cache: Present 00:20:57.652 Atomic Write Unit (Normal): 1 00:20:57.652 Atomic Write Unit (PFail): 1 00:20:57.652 Atomic Compare & Write Unit: 1 00:20:57.652 Fused Compare & Write: Supported 00:20:57.652 Scatter-Gather List 00:20:57.652 SGL Command Set: Supported 00:20:57.652 SGL Keyed: Supported 00:20:57.652 SGL Bit Bucket Descriptor: Not Supported 00:20:57.652 SGL Metadata Pointer: Not Supported 00:20:57.652 Oversized SGL: Not Supported 00:20:57.652 SGL Metadata Address: Not Supported 00:20:57.652 SGL Offset: Supported 00:20:57.652 Transport SGL Data Block: Not Supported 00:20:57.652 Replay Protected Memory Block: Not Supported 00:20:57.652 00:20:57.652 Firmware Slot Information 00:20:57.653 ========================= 00:20:57.653 Active slot: 1 00:20:57.653 Slot 1 Firmware Revision: 24.09 00:20:57.653 00:20:57.653 00:20:57.653 Commands Supported and Effects 00:20:57.653 ============================== 00:20:57.653 Admin Commands 00:20:57.653 -------------- 00:20:57.653 Get Log Page (02h): Supported 00:20:57.653 Identify (06h): Supported 00:20:57.653 Abort (08h): Supported 00:20:57.653 Set Features (09h): Supported 00:20:57.653 Get Features (0Ah): Supported 00:20:57.653 Asynchronous Event Request (0Ch): Supported 00:20:57.653 Keep Alive (18h): Supported 00:20:57.653 I/O Commands 00:20:57.653 ------------ 00:20:57.653 Flush (00h): Supported LBA-Change 00:20:57.653 Write (01h): Supported LBA-Change 00:20:57.653 Read (02h): Supported 00:20:57.653 Compare (05h): Supported 00:20:57.653 Write Zeroes (08h): Supported LBA-Change 00:20:57.653 Dataset Management (09h): Supported LBA-Change 00:20:57.653 Copy (19h): Supported LBA-Change 00:20:57.653 Unknown (79h): Supported LBA-Change 00:20:57.653 Unknown (7Ah): Supported 00:20:57.653 00:20:57.653 Error Log 00:20:57.653 ========= 00:20:57.653 00:20:57.653 Arbitration 00:20:57.653 =========== 00:20:57.653 Arbitration Burst: 1 00:20:57.653 00:20:57.653 Power Management 00:20:57.653 ================ 00:20:57.653 Number of Power States: 1 00:20:57.653 Current Power State: Power State #0 00:20:57.653 Power State #0: 00:20:57.653 Max Power: 0.00 W 00:20:57.653 Non-Operational State: Operational 00:20:57.653 Entry Latency: Not Reported 00:20:57.653 Exit Latency: Not Reported 00:20:57.653 Relative Read Throughput: 0 00:20:57.653 Relative Read Latency: 0 00:20:57.653 Relative Write Throughput: 0 00:20:57.653 Relative Write Latency: 0 00:20:57.653 Idle Power: Not Reported 00:20:57.653 Active Power: Not Reported 00:20:57.653 Non-Operational Permissive Mode: Not Supported 00:20:57.653 00:20:57.653 Health Information 00:20:57.653 ================== 
00:20:57.653 Critical Warnings: 00:20:57.653 Available Spare Space: OK 00:20:57.653 Temperature: OK 00:20:57.653 Device Reliability: OK 00:20:57.653 Read Only: No 00:20:57.653 Volatile Memory Backup: OK 00:20:57.653 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:57.653 Temperature Threshold: [2024-07-12 11:57:46.903940] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.903953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf48540) 00:20:57.653 [2024-07-12 11:57:46.903967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.653 [2024-07-12 11:57:46.903991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa8d40, cid 7, qid 0 00:20:57.653 [2024-07-12 11:57:46.904117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.653 [2024-07-12 11:57:46.904132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.653 [2024-07-12 11:57:46.904140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa8d40) on tqpair=0xf48540 00:20:57.653 [2024-07-12 11:57:46.904185] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:57.653 [2024-07-12 11:57:46.904207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.653 [2024-07-12 11:57:46.904220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.653 [2024-07-12 11:57:46.904230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.653 [2024-07-12 11:57:46.904239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.653 [2024-07-12 11:57:46.904253] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.653 [2024-07-12 11:57:46.904279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.653 [2024-07-12 11:57:46.904315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.653 [2024-07-12 11:57:46.904480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.653 [2024-07-12 11:57:46.904493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.653 [2024-07-12 11:57:46.904501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904508] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.653 [2024-07-12 11:57:46.904519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904534] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.653 [2024-07-12 11:57:46.904545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.653 [2024-07-12 11:57:46.904571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.653 [2024-07-12 11:57:46.904661] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.653 [2024-07-12 11:57:46.904673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.653 [2024-07-12 11:57:46.904681] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904688] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.653 [2024-07-12 11:57:46.904696] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:57.653 [2024-07-12 11:57:46.904705] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:57.653 [2024-07-12 11:57:46.904721] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904730] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904737] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.653 [2024-07-12 11:57:46.904748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.653 [2024-07-12 11:57:46.904773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.653 [2024-07-12 11:57:46.904852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.653 [2024-07-12 11:57:46.904875] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.653 [2024-07-12 11:57:46.904884] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.653 [2024-07-12 11:57:46.904909] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.904926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.653 [2024-07-12 11:57:46.904937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.653 [2024-07-12 11:57:46.904958] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.653 [2024-07-12 11:57:46.905040] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.653 [2024-07-12 11:57:46.905052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.653 [2024-07-12 11:57:46.905059] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.653 [2024-07-12 11:57:46.905067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.653 [2024-07-12 11:57:46.905083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905093] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905100] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.905111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905131] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.905214] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.905229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.905236] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.905260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.905288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.905383] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.905398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.905405] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905412] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.905428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905438] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905445] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.905456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.905559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.905572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.905579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.905603] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905613] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 
[2024-07-12 11:57:46.905631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.905724] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.905737] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.905744] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.905767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905777] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.905795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905816] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.905897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.905911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.905918] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.905942] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.905959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.905970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.905991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.906067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.906079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.906087] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906093] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.906109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906119] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.906137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.906158] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.906238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.906252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.906260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906267] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.906283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906300] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.906311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.906332] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.906409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.654 [2024-07-12 11:57:46.906422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.654 [2024-07-12 11:57:46.906429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.654 [2024-07-12 11:57:46.906452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906462] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.654 [2024-07-12 11:57:46.906469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.654 [2024-07-12 11:57:46.906480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.654 [2024-07-12 11:57:46.906501] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.654 [2024-07-12 11:57:46.906582] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.655 [2024-07-12 11:57:46.906596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.656 [2024-07-12 11:57:46.906604] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906611] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.656 [2024-07-12 11:57:46.906627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906637] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.656 [2024-07-12 11:57:46.906655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.656 [2024-07-12 11:57:46.906676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.656 [2024-07-12 11:57:46.906749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.656 
[2024-07-12 11:57:46.906762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.656 [2024-07-12 11:57:46.906769] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.656 [2024-07-12 11:57:46.906793] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.906809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.656 [2024-07-12 11:57:46.906820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.656 [2024-07-12 11:57:46.906841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.656 [2024-07-12 11:57:46.910892] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.656 [2024-07-12 11:57:46.910913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.656 [2024-07-12 11:57:46.910921] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.910927] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.656 [2024-07-12 11:57:46.910945] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.910971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.910978] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf48540) 00:20:57.656 [2024-07-12 11:57:46.910989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.656 [2024-07-12 11:57:46.911012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfa87c0, cid 3, qid 0 00:20:57.656 [2024-07-12 11:57:46.911130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:57.656 [2024-07-12 11:57:46.911143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:57.656 [2024-07-12 11:57:46.911150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:57.656 [2024-07-12 11:57:46.911157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfa87c0) on tqpair=0xf48540 00:20:57.656 [2024-07-12 11:57:46.911171] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:57.656 0 Kelvin (-273 Celsius) 00:20:57.656 Available Spare: 0% 00:20:57.656 Available Spare Threshold: 0% 00:20:57.656 Life Percentage Used: 0% 00:20:57.656 Data Units Read: 0 00:20:57.656 Data Units Written: 0 00:20:57.656 Host Read Commands: 0 00:20:57.656 Host Write Commands: 0 00:20:57.656 Controller Busy Time: 0 minutes 00:20:57.656 Power Cycles: 0 00:20:57.656 Power On Hours: 0 hours 00:20:57.656 Unsafe Shutdowns: 0 00:20:57.656 Unrecoverable Media Errors: 0 00:20:57.656 Lifetime Error Log Entries: 0 00:20:57.656 Warning Temperature Time: 0 minutes 00:20:57.656 Critical Temperature Time: 0 minutes 00:20:57.656 00:20:57.656 Number of Queues 00:20:57.656 ================ 00:20:57.656 Number of I/O Submission Queues: 127 00:20:57.656 Number of I/O Completion Queues: 127 00:20:57.656 
00:20:57.656 Active Namespaces 00:20:57.656 ================= 00:20:57.656 Namespace ID:1 00:20:57.656 Error Recovery Timeout: Unlimited 00:20:57.656 Command Set Identifier: NVM (00h) 00:20:57.656 Deallocate: Supported 00:20:57.656 Deallocated/Unwritten Error: Not Supported 00:20:57.656 Deallocated Read Value: Unknown 00:20:57.656 Deallocate in Write Zeroes: Not Supported 00:20:57.656 Deallocated Guard Field: 0xFFFF 00:20:57.656 Flush: Supported 00:20:57.656 Reservation: Supported 00:20:57.656 Namespace Sharing Capabilities: Multiple Controllers 00:20:57.656 Size (in LBAs): 131072 (0GiB) 00:20:57.656 Capacity (in LBAs): 131072 (0GiB) 00:20:57.656 Utilization (in LBAs): 131072 (0GiB) 00:20:57.656 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:57.656 EUI64: ABCDEF0123456789 00:20:57.656 UUID: 2c2e33a3-9dde-43af-ab87-75cf14e333a7 00:20:57.656 Thin Provisioning: Not Supported 00:20:57.656 Per-NS Atomic Units: Yes 00:20:57.656 Atomic Boundary Size (Normal): 0 00:20:57.656 Atomic Boundary Size (PFail): 0 00:20:57.656 Atomic Boundary Offset: 0 00:20:57.656 Maximum Single Source Range Length: 65535 00:20:57.656 Maximum Copy Length: 65535 00:20:57.656 Maximum Source Range Count: 1 00:20:57.656 NGUID/EUI64 Never Reused: No 00:20:57.656 Namespace Write Protected: No 00:20:57.656 Number of LBA Formats: 1 00:20:57.656 Current LBA Format: LBA Format #00 00:20:57.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:57.656 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.656 rmmod nvme_tcp 00:20:57.656 rmmod nvme_fabrics 00:20:57.656 rmmod nvme_keyring 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 979750 ']' 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 979750 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 979750 ']' 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 979750 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:20:57.656 11:57:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 979750 00:20:57.656 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:57.656 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:57.656 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 979750' 00:20:57.656 killing process with pid 979750 00:20:57.656 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 979750 00:20:57.656 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 979750 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.916 11:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.449 11:57:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.449 00:21:00.449 real 0m5.557s 00:21:00.449 user 0m4.687s 00:21:00.449 sys 0m1.903s 00:21:00.449 11:57:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:00.449 11:57:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.449 ************************************ 00:21:00.449 END TEST nvmf_identify 00:21:00.449 ************************************ 00:21:00.449 11:57:49 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:00.449 11:57:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:00.449 11:57:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:00.449 11:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.449 ************************************ 00:21:00.449 START TEST nvmf_perf 00:21:00.449 ************************************ 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:00.449 * Looking for test storage... 
00:21:00.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.449 11:57:49 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.449 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.450 11:57:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.381 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:21:02.382 00:21:02.382 --- 10.0.0.2 ping statistics --- 00:21:02.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.382 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:21:02.382 00:21:02.382 --- 10.0.0.1 ping statistics --- 00:21:02.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.382 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=981843 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 981843 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 981843 ']' 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:02.382 11:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.382 [2024-07-12 11:57:51.789265] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:21:02.382 [2024-07-12 11:57:51.789362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.382 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.382 [2024-07-12 11:57:51.861011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.640 [2024-07-12 11:57:51.980550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.640 [2024-07-12 11:57:51.980603] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:02.640 [2024-07-12 11:57:51.980619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.640 [2024-07-12 11:57:51.980632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.640 [2024-07-12 11:57:51.980644] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.640 [2024-07-12 11:57:51.981889] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.640 [2024-07-12 11:57:51.981919] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.640 [2024-07-12 11:57:51.982037] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.640 [2024-07-12 11:57:51.982040] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:02.640 11:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:05.917 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:05.917 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:06.175 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:06.175 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:06.433 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:06.433 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:06.433 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:06.433 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:06.433 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.433 [2024-07-12 11:57:55.904010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.690 11:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.690 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:06.690 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:06.947 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:06.947 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:07.204 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.462 [2024-07-12 11:57:56.907709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.462 11:57:56 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:07.719 11:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:07.719 11:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:07.719 11:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:07.719 11:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:09.090 Initializing NVMe Controllers 00:21:09.090 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:09.090 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:09.090 Initialization complete. Launching workers. 00:21:09.090 ======================================================== 00:21:09.090 Latency(us) 00:21:09.090 Device Information : IOPS MiB/s Average min max 00:21:09.090 PCIE (0000:88:00.0) NSID 1 from core 0: 84597.58 330.46 377.74 43.20 7240.19 00:21:09.090 ======================================================== 00:21:09.090 Total : 84597.58 330.46 377.74 43.20 7240.19 00:21:09.090 00:21:09.090 11:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:09.090 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.464 Initializing NVMe Controllers 00:21:10.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:10.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:10.464 Initialization complete. Launching workers. 
00:21:10.464 ======================================================== 00:21:10.464 Latency(us) 00:21:10.464 Device Information : IOPS MiB/s Average min max 00:21:10.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 126.00 0.49 8296.59 150.74 45835.25 00:21:10.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20497.83 7927.81 47917.87 00:21:10.464 ======================================================== 00:21:10.464 Total : 177.00 0.69 11812.20 150.74 47917.87 00:21:10.464 00:21:10.464 11:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.464 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.839 Initializing NVMe Controllers 00:21:11.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:11.839 Initialization complete. Launching workers. 00:21:11.839 ======================================================== 00:21:11.839 Latency(us) 00:21:11.839 Device Information : IOPS MiB/s Average min max 00:21:11.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8566.67 33.46 3736.37 576.67 7854.15 00:21:11.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3941.85 15.40 8160.88 6810.24 15979.01 00:21:11.839 ======================================================== 00:21:11.839 Total : 12508.51 48.86 5130.68 576.67 15979.01 00:21:11.839 00:21:11.839 11:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:11.839 11:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:11.839 11:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:11.839 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.410 Initializing NVMe Controllers 00:21:14.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.410 Controller IO queue size 128, less than required. 00:21:14.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:14.410 Controller IO queue size 128, less than required. 00:21:14.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:14.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:14.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:14.410 Initialization complete. Launching workers. 
00:21:14.410 ======================================================== 00:21:14.410 Latency(us) 00:21:14.410 Device Information : IOPS MiB/s Average min max 00:21:14.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1750.68 437.67 74119.41 45353.78 127510.25 00:21:14.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.74 144.19 228534.82 77268.39 312362.72 00:21:14.410 ======================================================== 00:21:14.410 Total : 2327.42 581.86 112383.98 45353.78 312362.72 00:21:14.410 00:21:14.410 11:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:14.410 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.410 No valid NVMe controllers or AIO or URING devices found 00:21:14.410 Initializing NVMe Controllers 00:21:14.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.410 Controller IO queue size 128, less than required. 00:21:14.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:14.410 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:14.410 Controller IO queue size 128, less than required. 00:21:14.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:14.410 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:14.410 WARNING: Some requested NVMe devices were skipped 00:21:14.410 11:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:14.410 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.936 Initializing NVMe Controllers 00:21:16.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.936 Controller IO queue size 128, less than required. 00:21:16.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.936 Controller IO queue size 128, less than required. 00:21:16.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:16.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:16.936 Initialization complete. Launching workers. 
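The pass below was started with --transport-stat, so before the usual latency table spdk_nvme_perf prints per-queue TCP transport counters for each namespace (polls, idle polls, socket completions, NVMe completions, submitted and queued requests), which gives a rough picture of how the two queue pairs shared the single core driving the run.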
00:21:16.936 00:21:16.936 ==================== 00:21:16.936 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:16.936 TCP transport: 00:21:16.936 polls: 11711 00:21:16.936 idle_polls: 8251 00:21:16.936 sock_completions: 3460 00:21:16.936 nvme_completions: 6189 00:21:16.936 submitted_requests: 9192 00:21:16.936 queued_requests: 1 00:21:16.936 00:21:16.936 ==================== 00:21:16.936 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:16.936 TCP transport: 00:21:16.936 polls: 9312 00:21:16.936 idle_polls: 5872 00:21:16.936 sock_completions: 3440 00:21:16.936 nvme_completions: 6359 00:21:16.936 submitted_requests: 9530 00:21:16.936 queued_requests: 1 00:21:16.936 ======================================================== 00:21:16.936 Latency(us) 00:21:16.936 Device Information : IOPS MiB/s Average min max 00:21:16.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1546.97 386.74 84230.85 53106.30 135164.55 00:21:16.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1589.47 397.37 81848.84 43008.75 129461.63 00:21:16.936 ======================================================== 00:21:16.936 Total : 3136.44 784.11 83023.71 43008.75 135164.55 00:21:16.936 00:21:16.936 11:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:16.936 11:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.194 rmmod nvme_tcp 00:21:17.194 rmmod nvme_fabrics 00:21:17.194 rmmod nvme_keyring 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 981843 ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 981843 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 981843 ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 981843 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 981843 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 981843' 00:21:17.194 killing process with pid 981843 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 981843 00:21:17.194 11:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 981843 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.127 11:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.030 11:58:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.030 00:21:21.030 real 0m20.933s 00:21:21.030 user 1m4.219s 00:21:21.030 sys 0m5.220s 00:21:21.030 11:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:21.030 11:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:21.030 ************************************ 00:21:21.030 END TEST nvmf_perf 00:21:21.030 ************************************ 00:21:21.030 11:58:10 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:21.030 11:58:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:21.030 11:58:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:21.030 11:58:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.030 ************************************ 00:21:21.030 START TEST nvmf_fio_host 00:21:21.030 ************************************ 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:21.030 * Looking for test storage... 
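For reference, the nvmf_perf pass that just finished provisioned its target entirely over SPDK's JSON-RPC interface before any spdk_nvme_perf run: a TCP transport, one subsystem with a Malloc bdev and the local NVMe drive at 0000:88:00.0 as namespaces, and data plus discovery listeners on 10.0.0.2:4420. With the long workspace paths stripped, the sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf was then pointed first at the local PCIe drive as a baseline and afterwards at 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' with increasing queue depth and I/O size, ending with the --transport-stat pass above.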
00:21:21.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:21.030 11:58:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.031 11:58:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:22.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
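At this point nvmf/common.sh is enumerating candidate NICs for the fio host test: both Intel E810 functions (device ID 0x159b, ice driver) at 0000:0a:00.0 and 0000:0a:00.1 are matched, and each PCI function is then mapped to its kernel net device by globbing sysfs, roughly equivalent to:

    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # cvl_0_0
    ls /sys/bus/pci/devices/0000:0a:00.1/net/    # cvl_0_1

The resulting names cvl_0_0 and cvl_0_1 are what the namespace setup repeated further down operates on.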
00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:22.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:22.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:22.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.933 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:21:23.192 00:21:23.192 --- 10.0.0.2 ping statistics --- 00:21:23.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.192 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:23.192 00:21:23.192 --- 10.0.0.1 ping statistics --- 00:21:23.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.192 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:23.192 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=985736 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 985736 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 985736 ']' 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:23.193 11:58:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.193 [2024-07-12 11:58:12.577305] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:21:23.193 [2024-07-12 11:58:12.577392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.193 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.193 [2024-07-12 11:58:12.646360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.450 [2024-07-12 11:58:12.765773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
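Once this second target is up and provisioned (the same transport, subsystem, namespace and listener steps as before, this time backed by a Malloc1 bdev), the nvmf_fio_host workload is driven by fio through SPDK's NVMe fio plugin rather than the kernel initiator: the plugin library is LD_PRELOADed and the remote namespace is addressed directly in the --filename string. Reduced to its essentials, with the long workspace paths shortened, the first run below is:

    LD_PRELOAD=spdk/build/fio/spdk_nvme /usr/src/fio/fio \
        spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

fio reports the job as ioengine=spdk with iodepth=128, and a second pass repeats the invocation with mock_sgl_config.fio.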
00:21:23.450 [2024-07-12 11:58:12.765828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.450 [2024-07-12 11:58:12.765844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.450 [2024-07-12 11:58:12.765858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.450 [2024-07-12 11:58:12.765879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.451 [2024-07-12 11:58:12.765968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.451 [2024-07-12 11:58:12.765996] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.451 [2024-07-12 11:58:12.766046] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.451 [2024-07-12 11:58:12.766049] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.017 11:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:24.017 11:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:21:24.017 11:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:24.273 [2024-07-12 11:58:13.725203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.273 11:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:24.273 11:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:24.273 11:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.532 11:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:24.532 Malloc1 00:21:24.789 11:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.789 11:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:25.047 11:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.304 [2024-07-12 11:58:14.752749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.304 11:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:25.561 11:58:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:25.819 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:25.819 fio-3.35 00:21:25.819 Starting 1 thread 00:21:25.819 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.359 00:21:28.359 test: (groupid=0, jobs=1): err= 0: pid=986106: Fri Jul 12 11:58:17 2024 00:21:28.359 read: IOPS=9020, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2006msec) 00:21:28.359 slat (nsec): min=1916, max=152385, avg=2429.27, stdev=1758.80 00:21:28.359 clat (usec): min=2468, max=13065, avg=7721.84, stdev=645.88 00:21:28.359 lat (usec): min=2492, max=13067, avg=7724.27, stdev=645.78 00:21:28.359 clat percentiles (usec): 00:21:28.359 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:21:28.359 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:21:28.359 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:21:28.359 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11076], 99.95th=[11863], 00:21:28.359 | 99.99th=[12125] 00:21:28.359 bw ( KiB/s): 
min=35016, max=36880, per=99.92%, avg=36054.00, stdev=770.76, samples=4 00:21:28.359 iops : min= 8754, max= 9220, avg=9013.50, stdev=192.69, samples=4 00:21:28.359 write: IOPS=9040, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec); 0 zone resets 00:21:28.359 slat (usec): min=2, max=126, avg= 2.54, stdev= 1.27 00:21:28.359 clat (usec): min=1333, max=12167, avg=6391.18, stdev=529.98 00:21:28.359 lat (usec): min=1342, max=12169, avg=6393.71, stdev=529.92 00:21:28.359 clat percentiles (usec): 00:21:28.359 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:21:28.359 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:21:28.359 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:21:28.359 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10683], 99.95th=[11207], 00:21:28.359 | 99.99th=[11994] 00:21:28.359 bw ( KiB/s): min=35856, max=36376, per=99.98%, avg=36154.00, stdev=253.10, samples=4 00:21:28.359 iops : min= 8964, max= 9094, avg=9038.50, stdev=63.27, samples=4 00:21:28.359 lat (msec) : 2=0.03%, 4=0.11%, 10=99.70%, 20=0.16% 00:21:28.359 cpu : usr=64.64%, sys=33.32%, ctx=64, majf=0, minf=40 00:21:28.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:28.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:28.359 issued rwts: total=18096,18135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:28.359 00:21:28.359 Run status group 0 (all jobs): 00:21:28.359 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2006-2006msec 00:21:28.359 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:28.359 11:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:28.359 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:28.359 fio-3.35 00:21:28.359 Starting 1 thread 00:21:28.359 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.888 00:21:30.888 test: (groupid=0, jobs=1): err= 0: pid=986551: Fri Jul 12 11:58:20 2024 00:21:30.888 read: IOPS=8299, BW=130MiB/s (136MB/s)(260MiB/2006msec) 00:21:30.888 slat (usec): min=2, max=101, avg= 3.54, stdev= 1.72 00:21:30.888 clat (usec): min=2336, max=51309, avg=8982.90, stdev=3948.94 00:21:30.888 lat (usec): min=2339, max=51312, avg=8986.44, stdev=3948.95 00:21:30.888 clat percentiles (usec): 00:21:30.888 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 7046], 00:21:30.888 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:21:30.888 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11469], 95.00th=[12518], 00:21:30.888 | 99.00th=[16581], 99.50th=[46400], 99.90th=[50594], 99.95th=[51119], 00:21:30.888 | 99.99th=[51119] 00:21:30.888 bw ( KiB/s): min=55872, max=77088, per=50.47%, avg=67024.00, stdev=8882.81, samples=4 00:21:30.888 iops : min= 3492, max= 4818, avg=4189.00, stdev=555.18, samples=4 00:21:30.888 write: IOPS=4922, BW=76.9MiB/s (80.6MB/s)(137MiB/1787msec); 0 zone resets 00:21:30.888 slat (usec): min=30, max=140, avg=33.26, stdev= 4.54 00:21:30.888 clat (usec): min=5375, max=20566, avg=11391.00, stdev=1896.00 00:21:30.888 lat (usec): min=5406, max=20597, avg=11424.26, stdev=1895.98 00:21:30.888 clat percentiles (usec): 00:21:30.888 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:21:30.888 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:21:30.888 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13960], 95.00th=[14615], 00:21:30.888 | 99.00th=[16581], 99.50th=[17171], 99.90th=[19530], 99.95th=[20055], 00:21:30.888 | 99.99th=[20579] 00:21:30.888 bw ( KiB/s): min=58368, max=79648, per=88.84%, avg=69968.00, stdev=9315.55, samples=4 00:21:30.888 iops : min= 3648, max= 4978, avg=4373.00, stdev=582.22, samples=4 00:21:30.888 lat (msec) : 4=0.18%, 10=59.03%, 20=40.28%, 50=0.43%, 100=0.09% 00:21:30.888 cpu : usr=75.96%, sys=22.59%, ctx=55, 
majf=0, minf=64 00:21:30.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:30.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:30.888 issued rwts: total=16649,8796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:30.888 00:21:30.888 Run status group 0 (all jobs): 00:21:30.888 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2006-2006msec 00:21:30.888 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=137MiB (144MB), run=1787-1787msec 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.888 rmmod nvme_tcp 00:21:30.888 rmmod nvme_fabrics 00:21:30.888 rmmod nvme_keyring 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 985736 ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 985736 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 985736 ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 985736 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 985736 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 985736' 00:21:30.888 killing process with pid 985736 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 985736 00:21:30.888 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 985736 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.456 11:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.390 11:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.390 00:21:33.390 real 0m12.293s 00:21:33.390 user 0m36.548s 00:21:33.390 sys 0m3.934s 00:21:33.390 11:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:33.390 11:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.390 ************************************ 00:21:33.390 END TEST nvmf_fio_host 00:21:33.390 ************************************ 00:21:33.390 11:58:22 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:33.390 11:58:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:33.390 11:58:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:33.390 11:58:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.390 ************************************ 00:21:33.390 START TEST nvmf_failover 00:21:33.390 ************************************ 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:33.390 * Looking for test storage... 
00:21:33.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.390 11:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.292 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:35.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:35.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:35.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:35.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.293 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.551 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.551 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.551 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.551 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:21:35.551 00:21:35.551 --- 10.0.0.2 ping statistics --- 00:21:35.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.551 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:35.551 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:21:35.551 00:21:35.551 --- 10.0.0.1 ping statistics --- 00:21:35.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.552 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=988750 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 988750 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 988750 ']' 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.552 11:58:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:35.552 [2024-07-12 11:58:24.910279] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
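The netns plumbing traced just above (nvmf_tcp_init) reduces, once the xtrace noise is stripped away, to the short sequence below. This is only a readability sketch of what the trace already shows: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones detected in this particular run.

ip netns add cvl_0_0_ns_spdk                                        # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port (NVMF_TARGET_INTERFACE) into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the initiator side
ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check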
00:21:35.552 [2024-07-12 11:58:24.910350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.552 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.552 [2024-07-12 11:58:24.972448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:35.810 [2024-07-12 11:58:25.088669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.810 [2024-07-12 11:58:25.088724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.810 [2024-07-12 11:58:25.088753] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.810 [2024-07-12 11:58:25.088765] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.810 [2024-07-12 11:58:25.088775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.810 [2024-07-12 11:58:25.088873] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.810 [2024-07-12 11:58:25.088931] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.810 [2024-07-12 11:58:25.088935] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.810 11:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:36.068 [2024-07-12 11:58:25.454752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.068 11:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:36.325 Malloc0 00:21:36.325 11:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.583 11:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:36.841 11:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.099 [2024-07-12 11:58:26.494029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.099 11:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.357 [2024-07-12 11:58:26.758800] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.357 11:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:37.615 [2024-07-12 11:58:27.007666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=989038 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 989038 /var/tmp/bdevperf.sock 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 989038 ']' 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:37.615 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:37.872 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:37.872 11:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:21:37.872 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.441 NVMe0n1 00:21:38.441 11:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.011 00:21:39.011 11:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=989178 00:21:39.011 11:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.011 11:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:39.948 11:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.208 [2024-07-12 11:58:29.513651] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the state(5) to be set 00:21:40.209 [2024-07-12 11:58:29.513750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the 
state(5) to be set 00:21:40.209 [... the same tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1e709a0 repeats here, with timestamps running from 11:58:29.513767 through 11:58:29.515142, emitted while the nvmf_subsystem_remove_listener call above tears down the 10.0.0.2:4420 listener ...] 00:21:40.210 [2024-07-12 11:58:29.515154] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the
state(5) to be set 00:21:40.210 [2024-07-12 11:58:29.515167] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the state(5) to be set 00:21:40.210 [2024-07-12 11:58:29.515180] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the state(5) to be set 00:21:40.210 [2024-07-12 11:58:29.515192] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e709a0 is same with the state(5) to be set 00:21:40.210 11:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:43.500 11:58:32 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.758 00:21:43.758 11:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.758 [2024-07-12 11:58:33.251288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720c0 is same with the state(5) to be set 00:21:43.758 [2024-07-12 11:58:33.251358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720c0 is same with the state(5) to be set 00:21:43.758 [2024-07-12 11:58:33.251374] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720c0 is same with the state(5) to be set 00:21:43.758 [2024-07-12 11:58:33.251387] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720c0 is same with the state(5) to be set 00:21:44.014 11:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:47.294 11:58:36 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.294 [2024-07-12 11:58:36.497327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.294 11:58:36 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:48.224 11:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:48.481 [2024-07-12 11:58:37.792726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792799] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792854] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792875] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 [2024-07-12 11:58:37.792891] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e727c0 is same with the state(5) to be set 00:21:48.482 11:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 989178 00:21:55.041 0 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 989038 ']' 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 989038' 00:21:55.041 killing process with pid 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 989038 00:21:55.041 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:55.041 [2024-07-12 11:58:27.067686] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:21:55.041 [2024-07-12 11:58:27.067778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989038 ] 00:21:55.041 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.041 [2024-07-12 11:58:27.131228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.041 [2024-07-12 11:58:27.244271] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.041 Running I/O for 15 seconds... 
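Condensed, the failover exercise this log drives is: one Malloc namespace is exported through subsystem nqn.2016-06.io.spdk:cnode1 on three TCP listeners, bdevperf attaches the same NVMe0 controller over two of them and runs verify I/O, and the listeners are then removed and re-added underneath it. A sketch of that sequence, using the same RPC calls seen in the trace above (the full /var/jenkins/... script paths are shortened here to rpc.py, bdevperf and bdevperf.py):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &      # I/O generator, controlled over its own RPC socket
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                          # 15 s of verify I/O against NVMe0n1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # first failover trigger
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The READ commands reported below with ABORTED - SQ DELETION status are the I/Os that were outstanding on the 10.0.0.2:4420 path when its listener was removed; the remaining attached paths keep NVMe0n1 serviceable, which is the failover behaviour this test exercises (the run above finishes with result 0).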
00:21:55.041 [2024-07-12 11:58:29.516100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.041 [2024-07-12 11:58:29.516938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.041 [2024-07-12 11:58:29.516953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.516968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.516982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.516997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517054] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 
11:58:29.517947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.517978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.517994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.518007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.042 [2024-07-12 11:58:29.518036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.042 [2024-07-12 11:58:29.518065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.042 [2024-07-12 11:58:29.518094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.042 [2024-07-12 11:58:29.518123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.042 [2024-07-12 11:58:29.518152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.042 [2024-07-12 11:58:29.518167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.042 [2024-07-12 11:58:29.518181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:55.043 [2024-07-12 11:58:29.518847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.518975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.518989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.043 [2024-07-12 11:58:29.519404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.043 [2024-07-12 11:58:29.519418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.044 [2024-07-12 11:58:29.519710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:29.519725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79640 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:55.044 [2024-07-12 11:58:29.519898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.519926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:55.044 [2024-07-12 11:58:29.519941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:55.044 [2024-07-12 11:58:29.519953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0
00:21:55.044 [2024-07-12 11:58:29.519971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.520029] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cdc270 was disconnected and freed. reset controller.
00:21:55.044 [2024-07-12 11:58:29.520048] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:55.044 [2024-07-12 11:58:29.520079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.044 [2024-07-12 11:58:29.520097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.520112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.044 [2024-07-12 11:58:29.520125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.520139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.044 [2024-07-12 11:58:29.520152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.520166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.044 [2024-07-12 11:58:29.520179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.044 [2024-07-12 11:58:29.520192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:55.044 [2024-07-12 11:58:29.520237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd2a0 (9): Bad file descriptor
00:21:55.044 [2024-07-12 11:58:29.523482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:55.044 [2024-07-12 11:58:29.556722] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:55.044 [2024-07-12 11:58:33.252782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.252848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.252897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.252931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.252972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.252988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.044 [2024-07-12 11:58:33.253492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.044 [2024-07-12 11:58:33.253506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.253971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.253985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.045 [2024-07-12 11:58:33.254180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85760 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 
[2024-07-12 11:58:33.254537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.045 [2024-07-12 11:58:33.254598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.045 [2024-07-12 11:58:33.254614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.254974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.046 [2024-07-12 11:58:33.255846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.046 [2024-07-12 11:58:33.255860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.255900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.255915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.255935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.255964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.255979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.255993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.256022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256037] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.256051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.256079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.047 [2024-07-12 11:58:33.256113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 
[2024-07-12 11:58:33.256670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.256957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.256970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.256983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.256994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.257005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.257017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.257030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.257041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.257052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.257064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.257077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.047 [2024-07-12 11:58:33.257088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.047 [2024-07-12 11:58:33.257099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:21:55.047 [2024-07-12 11:58:33.257117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.047 [2024-07-12 11:58:33.257130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.048 [2024-07-12 11:58:33.257141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.048 [2024-07-12 11:58:33.257156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:21:55.048 [2024-07-12 11:58:33.257169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.048 [2024-07-12 11:58:33.257192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.048 [2024-07-12 11:58:33.257204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:21:55.048 [2024-07-12 11:58:33.257216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.048 [2024-07-12 11:58:33.257239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.048 [2024-07-12 11:58:33.257250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85736 len:8 PRP1 0x0 PRP2 0x0 00:21:55.048 [2024-07-12 11:58:33.257266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.048 [2024-07-12 11:58:33.257291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.048 [2024-07-12 11:58:33.257302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85744 len:8 PRP1 0x0 PRP2 0x0 00:21:55.048 [2024-07-12 11:58:33.257314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257371] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e86b00 was disconnected and freed. reset controller. 00:21:55.048 [2024-07-12 11:58:33.257389] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:55.048 [2024-07-12 11:58:33.257421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:33.257439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:33.257473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:33.257500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:33.257526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:33.257539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.048 [2024-07-12 11:58:33.257592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd2a0 (9): Bad file descriptor 00:21:55.048 [2024-07-12 11:58:33.260822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:55.048 [2024-07-12 11:58:33.419184] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
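[editor's note] The stretch of log above is the expected failover sequence for this test: queued I/O on the old queue pair is aborted with "ABORTED - SQ DELETION", the qpair (0x1e86b00) is disconnected and freed, bdev_nvme fails over the controller from 10.0.0.2:4421 to 10.0.0.2:4422, and the reset completes successfully. As a hedged illustration only (not the test script itself), the sketch below shows how such a two-listener subsystem and forced failover could be driven with SPDK's rpc.py. The NQN, address, and ports are taken from the log; the bdev names (Malloc0, NVMe0), the serial number, and the "-x failover" multipath option name are assumptions for illustration.

# --- target side: one subsystem, two TCP listeners (sketch, assumed names) ---
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# --- host side: attach both paths, then drop the active listener to force failover ---
# "-x failover" is an assumed spelling of the rpc.py multipath mode flag
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# removing the first listener aborts in-flight I/O (the SQ DELETION notices above)
# and bdev_nvme resets the controller against the 4422 path
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421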
00:21:55.048 [2024-07-12 11:58:37.792800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:37.792842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.792860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:37.792883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.792899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:37.792912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.792926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.048 [2024-07-12 11:58:37.792939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.792962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbd2a0 is same with the state(5) to be set 00:21:55.048 [2024-07-12 11:58:37.793026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.048 [2024-07-12 11:58:37.793756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.048 [2024-07-12 11:58:37.793771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.793972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.793987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:55.049 [2024-07-12 11:58:37.794130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.049 [2024-07-12 11:58:37.794205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794420] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.049 [2024-07-12 11:58:37.794837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.049 [2024-07-12 11:58:37.794852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.794872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.794889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.794910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.794926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.794939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.794954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.794967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.794982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 
[2024-07-12 11:58:37.795313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.050 [2024-07-12 11:58:37.795809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.795982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.795996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.796011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.796029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.796044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.796059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.796073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.796087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.050 [2024-07-12 11:58:37.796102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.050 [2024-07-12 11:58:37.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 
11:58:37.796516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.051 [2024-07-12 11:58:37.796793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796822] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.051 [2024-07-12 11:58:37.796836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.051 [2024-07-12 11:58:37.796849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49840 len:8 PRP1 0x0 PRP2 0x0 00:21:55.051 [2024-07-12 11:58:37.796861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.051 [2024-07-12 11:58:37.796932] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce0660 was disconnected and freed. reset controller. 00:21:55.051 [2024-07-12 11:58:37.796952] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:55.051 [2024-07-12 11:58:37.796965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.051 [2024-07-12 11:58:37.800205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:55.051 [2024-07-12 11:58:37.800246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd2a0 (9): Bad file descriptor 00:21:55.051 [2024-07-12 11:58:37.838620] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:55.051 00:21:55.051 Latency(us) 00:21:55.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.051 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:55.051 Verification LBA range: start 0x0 length 0x4000 00:21:55.051 NVMe0n1 : 15.00 8650.72 33.79 578.29 0.00 13841.83 561.30 16019.91 00:21:55.051 =================================================================================================================== 00:21:55.051 Total : 8650.72 33.79 578.29 0.00 13841.83 561.30 16019.91 00:21:55.051 Received shutdown signal, test time was about 15.000000 seconds 00:21:55.051 00:21:55.051 Latency(us) 00:21:55.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.051 =================================================================================================================== 00:21:55.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=991006 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 991006 /var/tmp/bdevperf.sock 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 991006 ']' 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:55.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.051 11:58:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:55.051 11:58:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:55.051 11:58:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:21:55.051 11:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:55.051 [2024-07-12 11:58:44.270467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:55.051 11:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.051 [2024-07-12 11:58:44.511086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:55.309 11:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.567 NVMe0n1 00:21:55.567 11:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.857 00:21:55.857 11:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.423 00:21:56.423 11:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.423 11:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:56.680 11:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.938 11:58:46 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:00.310 11:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.310 11:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:00.310 11:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=991686 00:22:00.310 11:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.310 11:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 991686 00:22:01.243 0 00:22:01.243 11:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.243 [2024-07-12 11:58:43.781550] Starting SPDK v24.09-pre git sha1 
5e8e6dfc2 / DPDK 24.03.0 initialization... 00:22:01.243 [2024-07-12 11:58:43.781638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991006 ] 00:22:01.243 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.243 [2024-07-12 11:58:43.840816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.243 [2024-07-12 11:58:43.946831] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.243 [2024-07-12 11:58:46.175350] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:01.243 [2024-07-12 11:58:46.175437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.243 [2024-07-12 11:58:46.175461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.243 [2024-07-12 11:58:46.175479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.243 [2024-07-12 11:58:46.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.243 [2024-07-12 11:58:46.175507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.243 [2024-07-12 11:58:46.175521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.243 [2024-07-12 11:58:46.175535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.243 [2024-07-12 11:58:46.175549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.243 [2024-07-12 11:58:46.175563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.243 [2024-07-12 11:58:46.175618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.243 [2024-07-12 11:58:46.175653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa12a0 (9): Bad file descriptor 00:22:01.243 [2024-07-12 11:58:46.185690] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:01.243 Running I/O for 1 seconds... 
00:22:01.243 00:22:01.243 Latency(us) 00:22:01.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:01.244 Verification LBA range: start 0x0 length 0x4000 00:22:01.244 NVMe0n1 : 1.01 8773.84 34.27 0.00 0.00 14530.26 2827.76 15631.55 00:22:01.244 =================================================================================================================== 00:22:01.244 Total : 8773.84 34.27 0.00 0.00 14530.26 2827.76 15631.55 00:22:01.244 11:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.244 11:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:01.501 11:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.758 11:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.758 11:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:02.016 11:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.273 11:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:05.548 11:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.548 11:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:05.548 11:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 991006 00:22:05.548 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 991006 ']' 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 991006 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 991006 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 991006' 00:22:05.549 killing process with pid 991006 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 991006 00:22:05.549 11:58:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 991006 00:22:05.806 11:58:55 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:05.806 11:58:55 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:06.064 11:58:55 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.064 rmmod nvme_tcp 00:22:06.064 rmmod nvme_fabrics 00:22:06.064 rmmod nvme_keyring 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 988750 ']' 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 988750 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 988750 ']' 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 988750 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 988750 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 988750' 00:22:06.064 killing process with pid 988750 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 988750 00:22:06.064 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 988750 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.323 11:58:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.855 11:58:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.855 00:22:08.855 real 0m35.105s 00:22:08.855 user 2m3.950s 00:22:08.855 sys 0m5.813s 00:22:08.855 11:58:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:08.855 11:58:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:08.855 
************************************ 00:22:08.855 END TEST nvmf_failover 00:22:08.855 ************************************ 00:22:08.855 11:58:57 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.856 11:58:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:08.856 11:58:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:08.856 11:58:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.856 ************************************ 00:22:08.856 START TEST nvmf_host_discovery 00:22:08.856 ************************************ 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.856 * Looking for test storage... 00:22:08.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.856 11:58:57 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.856 11:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:10.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:10.755 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:10.755 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:10.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:10.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:22:10.755 00:22:10.755 --- 10.0.0.2 ping statistics --- 00:22:10.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.755 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:10.755 00:22:10.755 --- 10.0.0.1 ping statistics --- 00:22:10.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.755 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=994285 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 994285 00:22:10.755 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 994285 ']' 00:22:10.756 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.756 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:10.756 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.756 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:10.756 11:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.756 [2024-07-12 11:59:00.033962] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:22:10.756 [2024-07-12 11:59:00.034048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.756 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.756 [2024-07-12 11:59:00.097717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.756 [2024-07-12 11:59:00.203208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.756 [2024-07-12 11:59:00.203261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.756 [2024-07-12 11:59:00.203276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.756 [2024-07-12 11:59:00.203287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.756 [2024-07-12 11:59:00.203297] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
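The interface plumbing that nvmf_tcp_init performs in the trace above amounts to the following standalone commands (a sketch reconstructed from the xtrace output; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                         # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability

The nvmf_tgt instance started above (nvmfpid=994285) is launched through NVMF_TARGET_NS_CMD, i.e. inside cvl_0_0_ns_spdk, so its TCP listeners on 10.0.0.2 live in that namespace while the host side connects from the root namespace over cvl_0_1.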
00:22:10.756 [2024-07-12 11:59:00.203349] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 [2024-07-12 11:59:00.348388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 [2024-07-12 11:59:00.356560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 null0 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 null1 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=994309 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 994309 /tmp/host.sock 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 994309 ']' 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:11.014 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:11.014 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 [2024-07-12 11:59:00.428526] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:22:11.014 [2024-07-12 11:59:00.428603] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994309 ] 00:22:11.014 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.014 [2024-07-12 11:59:00.489743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.272 [2024-07-12 11:59:00.599361] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
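At this point two SPDK applications are running: the target (pid 994285, core mask 0x2, RPC socket /var/tmp/spdk.sock) and a second nvmf_tgt acting as the discovery host (pid 994309, core mask 0x1, RPC socket /tmp/host.sock). The RPC sequence issued so far is roughly the following (a sketch using the stock scripts/rpc.py client; the test itself goes through its rpc_cmd helper):

  # target side (default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                       # discovery service on port 8009
  scripts/rpc.py bdev_null_create null0 1000 512       # backing bdevs to export later
  scripts/rpc.py bdev_null_create null1 1000 512

  # host side (socket /tmp/host.sock)
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The get_subsystem_names and get_bdev_list helpers exercised below are thin wrappers around bdev_nvme_get_controllers and bdev_get_bdevs on the host socket, filtered through jq -r '.[].name' and sorted; both are expected to stay empty until the target exposes a subsystem this host is allowed to attach to.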
00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.272 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:11.530 11:59:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.530 11:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 [2024-07-12 11:59:01.006310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.530 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:22:11.788 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:22:11.789 11:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:22:12.354 [2024-07-12 11:59:01.776014] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.354 [2024-07-12 11:59:01.776049] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.354 [2024-07-12 11:59:01.776082] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.612 [2024-07-12 11:59:01.862371] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:12.612 [2024-07-12 11:59:02.048500] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.612 [2024-07-12 11:59:02.048529] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:12.871 11:59:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:12.871 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 [2024-07-12 11:59:02.438701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.130 [2024-07-12 11:59:02.439448] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:13.130 [2024-07-12 11:59:02.439484] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.130 11:59:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:13.130 11:59:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:22:13.130 [2024-07-12 11:59:02.565333] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:13.389 [2024-07-12 11:59:02.628843] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:13.389 [2024-07-12 11:59:02.628885] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:13.389 [2024-07-12 11:59:02.628898] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.333 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 [2024-07-12 11:59:03.659285] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:14.333 [2024-07-12 11:59:03.659326] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:14.334 [2024-07-12 11:59:03.659520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.334 [2024-07-12 11:59:03.659554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.334 [2024-07-12 11:59:03.659574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.334 [2024-07-12 11:59:03.659590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.334 [2024-07-12 11:59:03.659606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.334 [2024-07-12 11:59:03.659622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.334 [2024-07-12 11:59:03.659638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.334 [2024-07-12 11:59:03.659652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.334 [2024-07-12 11:59:03.659668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.334 11:59:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.334 [2024-07-12 11:59:03.669519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.334 [2024-07-12 11:59:03.679569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.334 [2024-07-12 11:59:03.679759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.334 [2024-07-12 11:59:03.679804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.334 [2024-07-12 11:59:03.679821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 [2024-07-12 11:59:03.679860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 [2024-07-12 11:59:03.679931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.334 [2024-07-12 11:59:03.679950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.334 [2024-07-12 11:59:03.679965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.334 [2024-07-12 11:59:03.679986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
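The repeated "connect() failed, errno = 111" (ECONNREFUSED) entries are expected here: host/discovery.sh@127 has just removed the 4420 listener from nqn.2016-06.io.spdk:cnode0, so the host's path on that port keeps failing to reconnect until the discovery poller fetches an updated log page and drops it. The trigger and the check the test performs afterwards look roughly like this as standalone RPCs (a sketch, scripts/rpc.py assumed):

  # target side: withdraw the first data port
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # host side: wait until only 4421 is reported for controller nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs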
00:22:14.334 [2024-07-12 11:59:03.689652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.334 [2024-07-12 11:59:03.689831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.334 [2024-07-12 11:59:03.689861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.334 [2024-07-12 11:59:03.689889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 [2024-07-12 11:59:03.689928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 [2024-07-12 11:59:03.689948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.334 [2024-07-12 11:59:03.689962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.334 [2024-07-12 11:59:03.689975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.334 [2024-07-12 11:59:03.689994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.334 [2024-07-12 11:59:03.699728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.334 [2024-07-12 11:59:03.699885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.334 [2024-07-12 11:59:03.699929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.334 [2024-07-12 11:59:03.699946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 [2024-07-12 11:59:03.699968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 [2024-07-12 11:59:03.700001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.334 [2024-07-12 11:59:03.700018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.334 [2024-07-12 11:59:03.700032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.334 [2024-07-12 11:59:03.700050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:22:14.334 [2024-07-12 11:59:03.709803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.334 [2024-07-12 11:59:03.709976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.334 [2024-07-12 11:59:03.710007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.334 [2024-07-12 11:59:03.710030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 [2024-07-12 11:59:03.710054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 [2024-07-12 11:59:03.710074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.334 [2024-07-12 11:59:03.710088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.334 [2024-07-12 11:59:03.710101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.334 [2024-07-12 11:59:03.710130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
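For reference, the subsystem whose 4420 path is being torn down here was assembled earlier in the trace (host/discovery.sh@86 through @118) with these target-side RPCs, again sketched through scripts/rpc.py:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0        # becomes nvme0n1 on the host
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1        # becomes nvme0n2 on the host
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

The nvme0 controller and the nvme0n1/nvme0n2 bdevs verified on the host side were created automatically by the discovery service in response to these changes; the test itself never issues an explicit attach on the host socket.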
00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.334 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.334 [2024-07-12 11:59:03.719888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.334 [2024-07-12 11:59:03.720055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.334 [2024-07-12 11:59:03.720083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.334 [2024-07-12 11:59:03.720099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.334 [2024-07-12 11:59:03.720121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.334 [2024-07-12 11:59:03.720154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.335 [2024-07-12 11:59:03.720171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.335 [2024-07-12 11:59:03.720185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.335 [2024-07-12 11:59:03.720219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.335 [2024-07-12 11:59:03.729981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.335 [2024-07-12 11:59:03.730120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.335 [2024-07-12 11:59:03.730147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.335 [2024-07-12 11:59:03.730163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.335 [2024-07-12 11:59:03.730185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.335 [2024-07-12 11:59:03.730205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.335 [2024-07-12 11:59:03.730219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.335 [2024-07-12 11:59:03.730232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.335 [2024-07-12 11:59:03.730251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
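The is_notification_count_eq checks that recur through the trace (host/discovery.sh@74/@75) count how many new notification events the host app has produced since the last recorded notify_id; earlier they confirmed a single new notification after nvme0n1 appeared and again after nvme0n2 was added. As a standalone command the check is roughly (sketch, scripts/rpc.py assumed; -i 2 is the last notify_id consumed at this point in the run):

  scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'

A result of 0 means no new bdev events were generated while the 4420 path was removed, which is what the test expects before it finishes with bdev_nvme_stop_discovery and waits for the host's controller list to drain.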
00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.335 [2024-07-12 11:59:03.740052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.335 [2024-07-12 11:59:03.740246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.335 [2024-07-12 11:59:03.740273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f860 with addr=10.0.0.2, port=4420 00:22:14.335 [2024-07-12 11:59:03.740289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f860 is same with the state(5) to be set 00:22:14.335 [2024-07-12 11:59:03.740311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f860 (9): Bad file descriptor 00:22:14.335 [2024-07-12 11:59:03.740344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:14.335 [2024-07-12 11:59:03.740361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:14.335 [2024-07-12 11:59:03.740375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:14.335 [2024-07-12 11:59:03.740395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.335 [2024-07-12 11:59:03.745989] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:14.335 [2024-07-12 11:59:03.746019] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.335 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 
-- # xtrace_disable 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.594 11:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.530 [2024-07-12 11:59:04.983753] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:15.530 [2024-07-12 11:59:04.983795] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:15.530 [2024-07-12 11:59:04.983821] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.788 [2024-07-12 11:59:05.071079] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:15.788 [2024-07-12 11:59:05.178490] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:15.788 [2024-07-12 11:59:05.178543] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.788 request: 00:22:15.788 { 00:22:15.788 "name": "nvme", 00:22:15.788 "trtype": "tcp", 00:22:15.788 "traddr": "10.0.0.2", 00:22:15.788 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:15.788 "adrfam": "ipv4", 00:22:15.788 "trsvcid": "8009", 00:22:15.788 "wait_for_attach": true, 00:22:15.788 "method": "bdev_nvme_start_discovery", 00:22:15.788 "req_id": 1 00:22:15.788 } 00:22:15.788 Got JSON-RPC error response 00:22:15.788 response: 00:22:15.788 { 00:22:15.788 "code": -17, 00:22:15.788 "message": "File exists" 00:22:15.788 } 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:15.788 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.048 request: 00:22:16.048 { 00:22:16.048 "name": "nvme_second", 00:22:16.048 "trtype": "tcp", 00:22:16.048 "traddr": "10.0.0.2", 00:22:16.048 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:16.048 "adrfam": "ipv4", 00:22:16.048 "trsvcid": "8009", 00:22:16.048 "wait_for_attach": true, 00:22:16.048 "method": "bdev_nvme_start_discovery", 00:22:16.048 "req_id": 1 00:22:16.048 } 00:22:16.048 Got JSON-RPC error response 00:22:16.048 response: 00:22:16.048 { 00:22:16.048 "code": -17, 00:22:16.048 "message": "File exists" 00:22:16.048 } 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:16.048 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.049 11:59:05 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.049 11:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.982 [2024-07-12 11:59:06.394322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.982 [2024-07-12 11:59:06.394379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe505a0 with addr=10.0.0.2, port=8010 00:22:16.982 [2024-07-12 11:59:06.394404] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:16.982 [2024-07-12 11:59:06.394420] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:16.982 [2024-07-12 11:59:06.394442] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:17.948 [2024-07-12 11:59:07.396591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.948 [2024-07-12 11:59:07.396630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe505a0 with addr=10.0.0.2, port=8010 00:22:17.948 [2024-07-12 11:59:07.396654] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:17.948 [2024-07-12 11:59:07.396668] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:17.948 [2024-07-12 11:59:07.396682] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:19.328 [2024-07-12 11:59:08.398931] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:19.328 request: 00:22:19.328 { 00:22:19.328 "name": "nvme_second", 00:22:19.328 "trtype": "tcp", 00:22:19.328 "traddr": "10.0.0.2", 00:22:19.328 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:19.328 "adrfam": "ipv4", 00:22:19.328 "trsvcid": "8010", 00:22:19.328 "attach_timeout_ms": 3000, 00:22:19.328 "method": "bdev_nvme_start_discovery", 00:22:19.328 "req_id": 1 00:22:19.328 } 00:22:19.328 Got JSON-RPC error response 00:22:19.328 response: 00:22:19.328 { 00:22:19.328 "code": -110, 00:22:19.328 "message": "Connection timed out" 
00:22:19.328 } 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 994309 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.328 rmmod nvme_tcp 00:22:19.328 rmmod nvme_fabrics 00:22:19.328 rmmod nvme_keyring 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 994285 ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 994285 ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 994285' 00:22:19.328 killing process with pid 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 994285 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.328 11:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.862 00:22:21.862 real 0m12.964s 00:22:21.862 user 0m18.778s 00:22:21.862 sys 0m2.701s 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.862 ************************************ 00:22:21.862 END TEST nvmf_host_discovery 00:22:21.862 ************************************ 00:22:21.862 11:59:10 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:21.862 11:59:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:21.862 11:59:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:21.862 11:59:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:21.862 ************************************ 00:22:21.862 START TEST nvmf_host_multipath_status 00:22:21.862 ************************************ 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:21.862 * Looking for test storage... 
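The discovery run above ends with nvmftestfini: unload the nvme-tcp/nvme-fabrics modules, kill the target app (pid 994285 in this run), remove the SPDK network namespace and flush the test interface, before nvmf_host_multipath_status starts below. A rough stand-alone equivalent of that teardown, with the pid, namespace and interface names taken from this run; the body of _remove_spdk_ns is not shown in the trace, so the netns delete line is an assumption:

sync
modprobe -v -r nvme-tcp                        # also drops nvme_fabrics / nvme_keyring, as traced above
modprobe -v -r nvme-fabrics
kill 994285                                    # nvmfpid of the discovery target in this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                       # drop the test address from the initiator-side interface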
00:22:21.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.862 11:59:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.862 11:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:23.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:23.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:23.768 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
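The scan above matched the two Intel E810 ports (vendor:device 0x8086:0x159b) at 0000:0a:00.0 and 0000:0a:00.1; the loop traced next resolves each PCI address to its network interface through sysfs, the same way nvmf/common.sh expands /sys/bus/pci/devices/$pci/net/*. A small stand-alone version of that lookup, assuming the PCI addresses from this host:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue           # glob stays literal when nothing matches
        echo "Found net device under $pci: $(basename "$netdir")"
    done
done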
00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:23.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:23.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.769 11:59:12 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.769 11:59:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:22:23.769 00:22:23.769 --- 10.0.0.2 ping statistics --- 00:22:23.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.769 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:23.769 00:22:23.769 --- 10.0.0.1 ping statistics --- 00:22:23.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.769 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=997345 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 997345 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 997345 ']' 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:23.769 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:23.769 [2024-07-12 11:59:13.151659] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:22:23.769 [2024-07-12 11:59:13.151732] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.769 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.769 [2024-07-12 11:59:13.217590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:24.027 [2024-07-12 11:59:13.327485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.027 [2024-07-12 11:59:13.327552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.027 [2024-07-12 11:59:13.327565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.027 [2024-07-12 11:59:13.327575] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.027 [2024-07-12 11:59:13.327600] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.027 [2024-07-12 11:59:13.327702] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.027 [2024-07-12 11:59:13.327707] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=997345 00:22:24.027 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.284 [2024-07-12 11:59:13.675538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.284 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:24.541 Malloc0 00:22:24.541 11:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:24.798 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.057 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.314 [2024-07-12 11:59:14.700100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.314 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:25.571 [2024-07-12 11:59:14.944831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=997626 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 997626 /var/tmp/bdevperf.sock 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 997626 ']' 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:25.571 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:25.572 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.572 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:25.572 11:59:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:25.829 11:59:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:25.829 11:59:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:22:25.829 11:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:26.087 11:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:26.655 Nvme0n1 00:22:26.655 11:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:26.912 Nvme0n1 00:22:26.912 11:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:26.912 11:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:29.442 11:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:29.442 11:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:29.442 11:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:29.442 11:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:30.820 11:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:30.820 11:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.820 11:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.820 11:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.820 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.820 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:30.820 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.820 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.077 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.077 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.077 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.077 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.335 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.335 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.335 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.335 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.592 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.592 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.592 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.592 11:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:22:31.850 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.850 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.850 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.850 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.108 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.108 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:32.108 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.365 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:32.624 11:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:33.563 11:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:33.563 11:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:33.563 11:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.563 11:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.820 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.820 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.820 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.820 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.078 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.078 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.078 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.078 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.337 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:22:34.337 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.337 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.337 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.594 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.594 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.594 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.594 11:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.852 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.852 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.852 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.852 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.109 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.109 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:35.109 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:35.368 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:35.627 11:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:36.563 11:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:36.563 11:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.563 11:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.563 11:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.822 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.822 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:22:36.822 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.822 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.081 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.081 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.081 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.081 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.339 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.339 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.339 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.339 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.596 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.596 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.597 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.597 11:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:37.887 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.887 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:37.887 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.888 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.145 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.145 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:38.145 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:38.403 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:38.662 11:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:39.596 11:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:39.596 11:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:39.596 11:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.596 11:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:39.854 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.854 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:39.854 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.854 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.111 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.111 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.111 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.111 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:40.369 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.369 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:40.369 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.369 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:40.627 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.627 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:40.627 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.627 11:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:40.884 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
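Note: the repeated current/connected/accessible probes in this trace all follow the same helper pattern from host/multipath_status.sh: query bdevperf's RPC socket with bdev_nvme_get_io_paths, pick the io_path whose trsvcid matches the port, and compare one boolean field against the expected value. A condensed sketch of that pattern is below; the rpc.py path and socket follow what the trace shows, but the function body is an illustrative reconstruction, not the verbatim script.
# Illustrative reconstruction of the port_status helper exercised above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    # Ask bdevperf (RPC socket /var/tmp/bdevperf.sock) for its io_paths and pull one field
    # for the path whose transport service id matches the requested port.
    actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}
# Example: assert the 4420 path is still connected while the 4421 path is not accessible.
port_status 4420 connected true
port_status 4421 accessible false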
00:22:40.884 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:40.884 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.885 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.143 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:41.143 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:41.143 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:41.400 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:41.659 11:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:42.595 11:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:42.595 11:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:42.595 11:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.595 11:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.852 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.852 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:42.852 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.852 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.110 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.110 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.110 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.110 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.368 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.368 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:22:43.368 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.368 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.626 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.626 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:43.626 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.626 11:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:43.884 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.884 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:43.884 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.884 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.141 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.141 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:44.141 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:44.398 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:44.658 11:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:45.597 11:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:45.597 11:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:45.597 11:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.597 11:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.855 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.855 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:45.855 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.855 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.112 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.112 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.112 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.112 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.370 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.370 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.370 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.370 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.628 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.628 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:46.628 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.628 11:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.886 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.886 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:46.886 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.886 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.143 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.143 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:47.401 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:47.401 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:47.659 11:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:47.917 11:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:48.851 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:48.851 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:48.851 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.851 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:49.107 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.107 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:49.107 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.108 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:49.365 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.365 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:49.365 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.365 11:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.621 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.621 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.621 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.621 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:49.878 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.878 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:49.878 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.878 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:50.135 11:59:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.135 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:50.135 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.135 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:50.392 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.392 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:50.392 11:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:50.651 11:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:50.909 11:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:51.845 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:51.845 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:51.845 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.845 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:52.102 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.102 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:52.102 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.102 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:52.359 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.360 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:52.360 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.360 11:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.617 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.617 11:59:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.617 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.617 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.874 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.874 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.874 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.874 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:53.133 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.133 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:53.133 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.133 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:53.389 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.389 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:53.389 11:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:53.646 11:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:53.903 11:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:54.878 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:54.878 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.878 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.878 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:55.135 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.135 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:55.135 11:59:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.135 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.393 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.393 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.393 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.393 11:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.651 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.651 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.651 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.651 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.907 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.907 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.907 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.907 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:56.164 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.164 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:56.164 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.164 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.420 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.420 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:56.420 11:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:56.677 11:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:56.935 11:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.307 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.565 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.565 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.565 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.565 11:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.823 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.823 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.823 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.823 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:59.081 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.081 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:59.081 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.081 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:59.338 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.338 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:59.338 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.339 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 997626 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 997626 ']' 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 997626 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 997626 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 997626' 00:22:59.596 killing process with pid 997626 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 997626 00:22:59.596 11:59:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 997626 00:22:59.596 Connection closed with partial response: 00:22:59.596 00:22:59.596 00:22:59.879 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 997626 00:22:59.879 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.879 [2024-07-12 11:59:15.017382] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:22:59.879 [2024-07-12 11:59:15.017462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997626 ] 00:22:59.879 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.879 [2024-07-12 11:59:15.077579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.879 [2024-07-12 11:59:15.188115] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.879 Running I/O for 90 seconds... 
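Note: the completion entries that follow come from the bdevperf log (try.txt) dumped above. The setup being exercised is the one traced earlier: the same subsystem is exported on ports 4420 and 4421, and bdevperf attaches both paths to a single Nvme0 controller, the second one with -x multipath, before the ANA states are flipped underneath the 90-second verify workload. A minimal sketch of that two-path attach, using the same rpc.py invocations that appear in the trace:
# Two-path attach as traced earlier; bdevperf's RPC socket is /var/tmp/bdevperf.sock,
# the target listens on 10.0.0.2 ports 4420 and 4421.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10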
00:22:59.879 [2024-07-12 11:59:30.698362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.879 [2024-07-12 11:59:30.698662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.879 [2024-07-12 11:59:30.698684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.698715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.698738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.698754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.699972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.699989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
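Note: each pair of entries in this dump is one queued WRITE and its completion; the (03/02) status printed with it is the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE, returned while the listener serving that path is in the inaccessible ANA state. When scanning a long try.txt like this one, a rough tally of how many I/Os completed with that status can be pulled with a one-liner (illustrative, run against the same file cat'd above):
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt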
00:22:59.880 [2024-07-12 11:59:30.700100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.880 [2024-07-12 11:59:30.700771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.880 [2024-07-12 11:59:30.700790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:22:59.881 [2024-07-12 11:59:30.701766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.701913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.701951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.701973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.701989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.881 [2024-07-12 11:59:30.702753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.702979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.702996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.703018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.703035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.703057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.703073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.703095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.703110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.703132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.703147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.881 [2024-07-12 11:59:30.703169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.881 [2024-07-12 11:59:30.703199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.703965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.703987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.882 [2024-07-12 11:59:30.704040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.882 [2024-07-12 11:59:30.704652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.882 [2024-07-12 11:59:30.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.882 [2024-07-12 11:59:30.704724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.882 [2024-07-12 11:59:30.704778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.882 [2024-07-12 11:59:30.704799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.882 [2024-07-12 11:59:30.704815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.704836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.704852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.704886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.704904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.704926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.704943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705658] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.705960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.705978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.883 [2024-07-12 11:59:30.706712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.883 [2024-07-12 11:59:30.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.706963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.706979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.884 
[2024-07-12 11:59:30.707186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.884 [2024-07-12 11:59:30.707312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.884 [2024-07-12 11:59:30.707333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 11:59:30.707698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.707973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.707989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.708973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.708989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.885 [2024-07-12 
11:59:30.709105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78088 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.885 [2024-07-12 11:59:30.709701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.885 [2024-07-12 11:59:30.709722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.709966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.709982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:22:59.886 [2024-07-12 11:59:30.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.710368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.710969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.710985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.886 [2024-07-12 11:59:30.711026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.886 [2024-07-12 11:59:30.711949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.886 [2024-07-12 11:59:30.711966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.711987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.712964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.712985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.887 
[2024-07-12 11:59:30.713139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.887 [2024-07-12 11:59:30.713591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.887 [2024-07-12 11:59:30.713607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.713974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.713991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.714033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.714072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.714110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 
11:59:30.714348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.714406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.714422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78904 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.888 [2024-07-12 11:59:30.715432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.888 [2024-07-12 11:59:30.715527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.888 [2024-07-12 11:59:30.715542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.715978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.715994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:22:59.889 [2024-07-12 11:59:30.716219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.889 [2024-07-12 11:59:30.716722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.889 [2024-07-12 11:59:30.716901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.889 [2024-07-12 11:59:30.716918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.716939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.716955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.716976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.716991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.890 [2024-07-12 11:59:30.717308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:59.890 [2024-07-12 11:59:30.717343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.717379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.717414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.717450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.717486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.717507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.717523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.718969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.718985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.719008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.719024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.719045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:22:59.890 [2024-07-12 11:59:30.719083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.719098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.890 [2024-07-12 11:59:30.719120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.890 [2024-07-12 11:59:30.719140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.719970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.719986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.891 [2024-07-12 11:59:30.720288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.891 [2024-07-12 11:59:30.720431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.891 [2024-07-12 11:59:30.720467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.891 [2024-07-12 11:59:30.720488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.720524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.720559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.720599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.720635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.720671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.720686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.892 [2024-07-12 11:59:30.721776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.721967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.721983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:22:59.892 [2024-07-12 11:59:30.722171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.892 [2024-07-12 11:59:30.722762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.892 [2024-07-12 11:59:30.722778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.722798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.722812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.722833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.722862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.722894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.722911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.722933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.722949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.722971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.722986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.893 [2024-07-12 11:59:30.723061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:59.893 [2024-07-12 11:59:30.723299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.893 [2024-07-12 11:59:30.723658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.893 [2024-07-12 11:59:30.723678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.893 [2024-07-12 11:59:30.723693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:59.893 [2024-07-12 11:59:30.723714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.893 [2024-07-12 11:59:30.723733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:22:59.893 [2024-07-12 11:59:30.723756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.893 [2024-07-12 11:59:30.723771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:22:59.893 [2024-07-12 11:59:30.723792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.893 [2024-07-12 11:59:30.723808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... long run of similar *NOTICE* entries omitted: alternating nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1 len:8, lba:77896-78912) and spdk_nvme_print_completion (ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0) pairs logged between 11:59:30.723 and 11:59:30.740 ...]
00:22:59.899 [2024-07-12 11:59:30.740676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.899 [2024-07-12 11:59:30.740692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:22:59.899 [2024-07-12 11:59:30.740713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.899 [2024-07-12 11:59:30.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:22:59.899 [2024-07-12 11:59:30.740750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.899 [2024-07-12 11:59:30.740765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.740786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.899 [2024-07-12 11:59:30.740802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.740824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.740862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.740894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.740957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.740973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.740995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:59.899 [2024-07-12 11:59:30.741566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.899 [2024-07-12 11:59:30.741836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.899 [2024-07-12 11:59:30.741874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.741899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.741915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.741936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.741951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.741972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.741987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:22:59.900 [2024-07-12 11:59:30.742699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.900 [2024-07-12 11:59:30.742713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.742748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.742772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.742788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.743975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.743997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.744013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.744034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.744050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.744071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.744086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.900 [2024-07-12 11:59:30.744108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.900 [2024-07-12 11:59:30.744124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.901 [2024-07-12 11:59:30.744477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.744966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.744987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
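
The run of *NOTICE* lines in this stretch of the log comes from SPDK's nvme_qpair error-print path (nvme_io_qpair_print_command / spdk_nvme_print_completion): each queued READ or WRITE on qid:1 is reported together with its completion, and every completion carries the path-related status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", i.e. status code type 0x3 with status code 0x02 and p:0 m:0 dnr:0. As a minimal illustration of how those fields pack into the 16-bit NVMe completion status word, the standalone C sketch below decodes that same combination; it assumes only the standard NVMe status layout (phase tag, SC, SCT, CRD, M, DNR) and is not taken from the SPDK sources.

/*
 * Minimal, standalone sketch (not SPDK code): decode the NVMe completion
 * status bits reported in the NOTICE lines of this log, assuming the
 * standard NVMe completion-entry status layout.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct nvme_status_bits {
	uint16_t p   : 1;   /* phase tag */
	uint16_t sc  : 8;   /* status code (0x02 in this log) */
	uint16_t sct : 3;   /* status code type (0x3 = path related) */
	uint16_t crd : 2;   /* command retry delay */
	uint16_t m   : 1;   /* more information available */
	uint16_t dnr : 1;   /* do not retry */
};

static const char *status_string(unsigned sct, unsigned sc)
{
	/* Only the combination that appears in this log is mapped here. */
	return (sct == 0x3 && sc == 0x02) ? "ASYMMETRIC ACCESS INACCESSIBLE" : "OTHER";
}

int main(void)
{
	/* Status halfword as posted by the controller: SCT=0x3, SC=0x02, all flags clear. */
	uint16_t raw = (uint16_t)((0x3u << 9) | (0x02u << 1));
	struct nvme_status_bits st;

	memcpy(&st, &raw, sizeof(st));
	printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
	       status_string(st.sct, st.sc),
	       (unsigned)st.sct, (unsigned)st.sc,
	       (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
	return 0;
}

Built with any C99 compiler, the sketch prints "ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0", matching the completion lines recorded above and below.
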
00:22:59.901 [2024-07-12 11:59:30.745583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.901 [2024-07-12 11:59:30.745687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.901 [2024-07-12 11:59:30.745701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.745721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.902 [2024-07-12 11:59:30.745740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.745762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.745777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.745798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.745812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.745832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.745862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.745895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.745913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.746965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.746981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.902 [2024-07-12 11:59:30.747095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:59.902 [2024-07-12 11:59:30.747446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.902 [2024-07-12 11:59:30.747749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.902 [2024-07-12 11:59:30.747764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.747834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.747897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.747935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.747971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.747992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.748385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:22:59.903 [2024-07-12 11:59:30.748596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.748954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.748993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.749055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.903 [2024-07-12 11:59:30.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.903 [2024-07-12 11:59:30.749849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.903 [2024-07-12 11:59:30.749883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.749917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.749944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.749960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.749986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.904 [2024-07-12 11:59:30.750264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.750967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.750983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751090] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.904 [2024-07-12 11:59:30.751647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.904 [2024-07-12 11:59:30.751672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:22:59.905 [2024-07-12 11:59:30.751944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.751961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.751986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.752002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.752043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.752084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:30.752125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:30.752182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:30.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:30.752264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:30.752423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:30.752443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.342703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.342774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.342824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.342862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.342913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.342934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.342950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.905 [2024-07-12 11:59:46.344622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.905 [2024-07-12 11:59:46.344659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.905 [2024-07-12 11:59:46.344916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.905 [2024-07-12 11:59:46.344939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.344957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.344979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.344995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.345411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.345444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:22:59.906 [2024-07-12 11:59:46.346637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.346849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.346878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.906 [2024-07-12 11:59:46.346897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.347252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.347280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.347298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.347320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.347337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.347358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.347374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.906 [2024-07-12 11:59:46.347397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.906 [2024-07-12 11:59:46.347414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.347793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.347834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.347882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.347922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.347961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.347983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.907 [2024-07-12 11:59:46.348182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.348785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.348980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.348996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.907 [2024-07-12 11:59:46.349613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.907 [2024-07-12 11:59:46.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.907 [2024-07-12 11:59:46.349772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.349799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.349815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.349837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.349854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.349883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.349908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.349930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.349945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.349967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.349984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:59.908 [2024-07-12 11:59:46.350005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.350021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.350042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.350081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.350097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.352403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.352440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.352513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.352623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.352716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.352737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.355063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.355110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.908 [2024-07-12 11:59:46.355149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.908 [2024-07-12 11:59:46.355598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.908 [2024-07-12 11:59:46.355846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.908 [2024-07-12 11:59:46.355862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.355894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.355911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.355933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.355949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.355971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.355987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.356492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.356513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.356529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.358900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.358925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.358958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.358977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:22:59.909 [2024-07-12 11:59:46.359119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.359367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.359404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.359442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.909 [2024-07-12 11:59:46.359861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.909 [2024-07-12 11:59:46.359897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.909 [2024-07-12 11:59:46.359922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.359944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.359961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.359983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.360000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.360022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.360038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.360060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.360077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.360099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.360115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.360136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.360152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.360175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.360192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.361453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.910 [2024-07-12 11:59:46.361557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.361814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.361853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.361926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.361948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.363689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.363726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.910 [2024-07-12 11:59:46.363885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.363926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.363964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.363986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.364006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.910 [2024-07-12 11:59:46.364029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.910 [2024-07-12 11:59:46.364045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:22:59.911 [2024-07-12 11:59:46.364194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.364948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.364974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.364992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.365015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.365032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.365054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.365070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.365093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.365109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.365131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.365162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.365185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.365201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.367941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.367968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.368061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.911 [2024-07-12 11:59:46.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.911 [2024-07-12 11:59:46.368139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.911 [2024-07-12 11:59:46.368459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.911 [2024-07-12 11:59:46.368474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.368657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.368713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.368751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.368823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.368886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.368966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.368988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.369043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.369117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.369274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:22:59.912 [2024-07-12 11:59:46.369296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.369312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.369425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.369991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.912 [2024-07-12 11:59:46.370618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.912 [2024-07-12 11:59:46.370639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.912 [2024-07-12 11:59:46.370658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.370701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.370718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.370758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.370775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.913 [2024-07-12 11:59:46.371422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.371957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.371980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.371996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.372017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.372033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.372055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.372071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.372093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.372108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.372130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.372150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.372173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.372190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.373644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.373690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.373729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.373781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.373834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.373878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.373935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.373973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.373995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.374011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.374050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.374087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:22:59.913 [2024-07-12 11:59:46.374115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.374132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.374170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.913 [2024-07-12 11:59:46.374223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.374259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.913 [2024-07-12 11:59:46.374280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.913 [2024-07-12 11:59:46.374295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.374331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.374366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.374401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.374436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.374471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.374506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.374541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.374602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.374617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.376780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.914 [2024-07-12 11:59:46.376917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.376977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.376993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.377031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.377068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.377105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.377143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.377184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.377239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.377291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.914 [2024-07-12 11:59:46.377327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.914 [2024-07-12 11:59:46.377347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.914 [2024-07-12 11:59:46.377363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.378304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.378347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.378383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.378419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.378455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.378692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.378708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.379778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:22:59.915 [2024-07-12 11:59:46.379799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.379815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.379878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.379927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.379965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.379987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.380002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.380040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.380078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.380115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.380153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.380190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.380212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.380228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.915 [2024-07-12 11:59:46.381725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.915 [2024-07-12 11:59:46.381784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.915 [2024-07-12 11:59:46.381800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.381822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.381838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.382537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.382598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.382638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.382675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.916 [2024-07-12 11:59:46.382712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.382749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.382805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.382894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.382937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.382997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.383013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.383035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.383051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.383072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.383092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.383115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.383132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.383154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.383170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.384842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:22:59.916 [2024-07-12 11:59:46.384944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.384982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.384998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.385021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.385036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.385058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.385100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.385117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.916 [2024-07-12 11:59:46.385171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.916 [2024-07-12 11:59:46.385193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.916 [2024-07-12 11:59:46.385208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.385245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.385260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.385280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.385295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.385316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.385330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.385350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.385366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.385387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.385401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.387846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.387973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.387989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.388065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.917 [2024-07-12 11:59:46.388146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.388183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.388296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.388355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.388371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.389326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.389368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.389405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.389440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.389499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.389541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.389579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.389615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.389636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.389652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.390410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.390436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.390467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.390487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.390509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.917 [2024-07-12 11:59:46.390526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.917 [2024-07-12 11:59:46.390547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.917 [2024-07-12 11:59:46.390564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.390714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.390758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.390811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.390862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.390972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.390988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:22:59.918 [2024-07-12 11:59:46.391009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.391025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.391062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.391099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.391137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.391175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.391212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.391254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.391277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.391293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.392182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.392408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.392444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.392480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.392521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.392596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.392617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.393205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.393261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.393475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.918 [2024-07-12 11:59:46.393549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.918 [2024-07-12 11:59:46.393570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.918 [2024-07-12 11:59:46.393586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.919 [2024-07-12 11:59:46.393623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.393659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.393702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.393755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.393793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.393830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.393852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.393875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.395837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.395963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.395979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:22:59.919 [2024-07-12 11:59:46.396000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.396016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.396037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.396053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.396074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.396090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.396112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.396127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.398019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.398045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.398088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.919 [2024-07-12 11:59:46.398113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.398138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.398155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.398176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.919 [2024-07-12 11:59:46.398193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.919 [2024-07-12 11:59:46.398215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.398967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.398983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.399005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.399021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.399957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.399983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.920 [2024-07-12 11:59:46.400027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.400066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.400104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.400141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.920 [2024-07-12 11:59:46.400185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.400223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.400260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.400298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.920 [2024-07-12 11:59:46.400319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.920 [2024-07-12 11:59:46.400335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.920 Received shutdown signal, test time was about 32.361838 seconds 00:22:59.920 00:22:59.921 Latency(us) 00:22:59.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.921 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.921 Verification LBA range: start 0x0 length 0x4000 00:22:59.921 Nvme0n1 : 32.36 8090.53 31.60 0.00 0.00 15795.59 190.39 4076242.11 00:22:59.921 
=================================================================================================================== 00:22:59.921 Total : 8090.53 31.60 0.00 0.00 15795.59 190.39 4076242.11 00:22:59.921 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.181 rmmod nvme_tcp 00:23:00.181 rmmod nvme_fabrics 00:23:00.181 rmmod nvme_keyring 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 997345 ']' 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 997345 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 997345 ']' 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 997345 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 997345 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 997345' 00:23:00.181 killing process with pid 997345 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 997345 00:23:00.181 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 997345 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.441 11:59:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.441 11:59:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.977 11:59:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.977 00:23:02.977 real 0m40.965s 00:23:02.977 user 2m3.426s 00:23:02.977 sys 0m10.466s 00:23:02.977 11:59:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:02.977 11:59:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:02.977 ************************************ 00:23:02.977 END TEST nvmf_host_multipath_status 00:23:02.977 ************************************ 00:23:02.977 11:59:51 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:02.977 11:59:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:02.977 11:59:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:02.977 11:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.977 ************************************ 00:23:02.977 START TEST nvmf_discovery_remove_ifc 00:23:02.977 ************************************ 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:02.977 * Looking for test storage... 
00:23:02.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.977 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.978 11:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.882 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:04.882 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:04.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.883 11:59:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:04.883 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:04.883 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.883 11:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:04.883 00:23:04.883 --- 10.0.0.2 ping statistics --- 00:23:04.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.883 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:23:04.883 00:23:04.883 --- 10.0.0.1 ping statistics --- 00:23:04.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.883 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1003812 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1003812 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1003812 ']' 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:04.883 11:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.883 [2024-07-12 11:59:54.088732] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
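The nvmftestinit trace above builds the usual two-port loopback topology for these phy runs: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, and both directions are verified with a single ping before the target application is launched inside the namespace. A minimal manual sketch of the same plumbing, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses seen in this trace:

    # move the target NIC into its own namespace; the initiator NIC stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the point-to-point link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic through and confirm reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1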
00:23:04.883 [2024-07-12 11:59:54.088816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.883 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.883 [2024-07-12 11:59:54.157069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.883 [2024-07-12 11:59:54.272433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.883 [2024-07-12 11:59:54.272488] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.883 [2024-07-12 11:59:54.272505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.883 [2024-07-12 11:59:54.272525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.883 [2024-07-12 11:59:54.272537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.883 [2024-07-12 11:59:54.272568] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.816 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.816 [2024-07-12 11:59:55.069455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.816 [2024-07-12 11:59:55.077594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:05.816 null0 00:23:05.817 [2024-07-12 11:59:55.109589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1003964 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1003964 /tmp/host.sock 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1003964 ']' 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:05.817 
11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:05.817 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:05.817 11:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.817 [2024-07-12 11:59:55.178363] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:23:05.817 [2024-07-12 11:59:55.178440] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003964 ] 00:23:05.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.817 [2024-07-12 11:59:55.244701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.074 [2024-07-12 11:59:55.360638] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.643 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.900 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.900 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:06.900 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.900 11:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.836 [2024-07-12 11:59:57.280566] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:07.836 [2024-07-12 11:59:57.280595] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:07.836 [2024-07-12 11:59:57.280620] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.096 [2024-07-12 11:59:57.408081] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:08.355 [2024-07-12 11:59:57.590317] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:08.355 [2024-07-12 11:59:57.590389] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:08.355 [2024-07-12 11:59:57.590433] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:08.355 [2024-07-12 11:59:57.590462] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:08.355 [2024-07-12 11:59:57.590489] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:08.355 [2024-07-12 11:59:57.638599] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21ca020 was disconnected and freed. delete nvme_qpair. 
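By this point discovery has attached nvme0 from 10.0.0.2:4420 and wait_for_bdev has confirmed that nvme0n1 appears in the host's bdev list, so the test begins removing the path it just connected through: the address is deleted from cvl_0_0 inside the namespace (traced just above) and the link itself is taken down just below, after which the script polls bdev_get_bdevs once per second and waits for nvme0n1 to drop out once reconnect attempts run into the 2-second ctrlr-loss timeout set on the discovery service. A rough equivalent of that fault-injection loop, assuming the /tmp/host.sock RPC socket and interface names from this run (not the literal wait_for_bdev helper):

    # pull the target-side path out from under the connected host
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # poll the host until the controller (and nvme0n1 with it) is finally dropped
    while scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -q '^nvme0n1$'; do
        sleep 1
    done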
00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:08.355 11:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:09.294 11:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:10.673 11:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:11.613 12:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:12.576 12:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.509 12:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.768 [2024-07-12 12:00:03.031077] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() 
failed, errno 110: Connection timed out 00:23:13.768 [2024-07-12 12:00:03.031154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.768 [2024-07-12 12:00:03.031175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.768 [2024-07-12 12:00:03.031207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.768 [2024-07-12 12:00:03.031220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.768 [2024-07-12 12:00:03.031233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.768 [2024-07-12 12:00:03.031246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.768 [2024-07-12 12:00:03.031274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.768 [2024-07-12 12:00:03.031289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.768 [2024-07-12 12:00:03.031305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.768 [2024-07-12 12:00:03.031320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.768 [2024-07-12 12:00:03.031334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21913e0 is same with the state(5) to be set 00:23:13.768 [2024-07-12 12:00:03.041097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21913e0 (9): Bad file descriptor 00:23:13.768 [2024-07-12 12:00:03.051145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.711 12:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.711 [2024-07-12 12:00:04.061933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:14.711 [2024-07-12 12:00:04.062002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21913e0 with addr=10.0.0.2, port=4420 00:23:14.711 [2024-07-12 12:00:04.062030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21913e0 is same with the state(5) to be set 00:23:14.711 [2024-07-12 12:00:04.062080] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21913e0 (9): Bad file descriptor 00:23:14.711 [2024-07-12 12:00:04.062548] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:14.711 [2024-07-12 12:00:04.062588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.711 [2024-07-12 12:00:04.062606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.711 [2024-07-12 12:00:04.062623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.711 [2024-07-12 12:00:04.062663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.711 [2024-07-12 12:00:04.062683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.711 12:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.711 12:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:14.711 12:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:15.648 [2024-07-12 12:00:05.065186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:15.648 [2024-07-12 12:00:05.065254] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:15.648 [2024-07-12 12:00:05.065295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.648 [2024-07-12 12:00:05.065318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.648 [2024-07-12 12:00:05.065337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.648 [2024-07-12 12:00:05.065352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.648 [2024-07-12 12:00:05.065369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.648 [2024-07-12 12:00:05.065383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.648 [2024-07-12 12:00:05.065399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.648 [2024-07-12 12:00:05.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.648 [2024-07-12 12:00:05.065429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.648 [2024-07-12 12:00:05.065444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.648 [2024-07-12 12:00:05.065466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
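At this point the host can no longer reach 10.0.0.2:4420 (the address was deleted and the link taken down), so each reconnect attempt times out with errno 110 and bdev_nvme eventually marks the controller failed and removes nvme0n1, which is exactly the transition the wait_for_bdev '' loop is waiting for. Outside the script, the same controller state could be inspected over the same RPC socket; a hedged aside, not part of the test itself:

    # Show the NVMe-oF controllers the host app knows about and their state;
    # the -n filter narrows the output to one controller.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
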
00:23:15.648 [2024-07-12 12:00:05.065734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190870 (9): Bad file descriptor 00:23:15.648 [2024-07-12 12:00:05.066752] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:15.648 [2024-07-12 12:00:05.066778] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.648 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:15.905 12:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:16.846 12:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.789 [2024-07-12 12:00:07.125008] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.789 [2024-07-12 12:00:07.125032] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.789 [2024-07-12 12:00:07.125059] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.789 [2024-07-12 12:00:07.211341] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:17.789 12:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.046 [2024-07-12 12:00:07.427766] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:18.046 [2024-07-12 12:00:07.427826] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:18.046 [2024-07-12 12:00:07.427884] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:18.046 [2024-07-12 12:00:07.427928] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:18.046 [2024-07-12 12:00:07.427944] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.046 [2024-07-12 12:00:07.433974] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21ae3e0 was disconnected and freed. delete nvme_qpair. 
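That completes the round trip the test exercises: unconfigure the target-side interface, watch the host bdev disappear, restore the interface, and let the discovery service re-attach the subsystem as nvme1/nvme1n1. Condensed from the trace above (interface cvl_0_0 and namespace cvl_0_0_ns_spdk are specific to this run):

    # Take the target address away and bring the link down inside the target netns.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... host reconnects time out, bdev_nvme deletes nvme0n1 ...

    # Put the address back and bring the link up again.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... discovery re-attaches the subsystem and nvme1n1 shows up ...
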
00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1003964 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1003964 ']' 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1003964 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1003964 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1003964' 00:23:18.981 killing process with pid 1003964 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1003964 00:23:18.981 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1003964 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.238 rmmod nvme_tcp 00:23:19.238 rmmod nvme_fabrics 00:23:19.238 rmmod nvme_keyring 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
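With nvme1n1 present, the trap is cleared and killprocess stops the host application (pid 1003964); the rmmod messages above come from nvmftestfini unloading the kernel NVMe/TCP initiator stack, and the lines that follow kill the nvmf target (pid 1003812) and drop the test namespace. Roughly, with the helper internals elided and the pid variables purely illustrative:

    # Stop the SPDK host and target applications started earlier in the test.
    kill -9 "$host_pid" "$tgt_pid" 2>/dev/null || true
    # Unload the kernel initiator modules pulled in for the TCP transport.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # How _remove_spdk_ns is assumed to clean up; namespace and device names
    # match this run only.
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
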
00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1003812 ']' 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1003812 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1003812 ']' 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1003812 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1003812 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1003812' 00:23:19.238 killing process with pid 1003812 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1003812 00:23:19.238 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1003812 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.496 12:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.024 12:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.024 00:23:22.024 real 0m19.063s 00:23:22.024 user 0m28.213s 00:23:22.024 sys 0m3.072s 00:23:22.024 12:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:22.024 12:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.024 ************************************ 00:23:22.024 END TEST nvmf_discovery_remove_ifc 00:23:22.024 ************************************ 00:23:22.024 12:00:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:22.024 12:00:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:22.024 12:00:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:22.024 12:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.024 ************************************ 00:23:22.024 START TEST nvmf_identify_kernel_target 00:23:22.024 ************************************ 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:22.024 * Looking for test storage... 00:23:22.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.024 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
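nvmftestinit, traced below, rebuilds the same two-port topology for this test: the two E810 ports are found as cvl_0_0/cvl_0_1, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits port 4420, and both directions are ping-checked. configure_kernel_target then builds a kernel nvmet subsystem over configfs and exports /dev/nvme0n1 on 10.0.0.1:4420. A condensed sketch of both phases: the commands are taken from the trace, while the configfs attribute file names (attr_allow_any_host, device_path, addr_*) are the standard nvmet names, inferred because the trace hides the redirection targets:

    # --- network topology (nvmf_tcp_init) ---
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # --- kernel NVMe-oF target (configure_kernel_target), attribute names assumed ---
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/
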
00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.025 12:00:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.929 12:00:13 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.929 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:23.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:23.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.930 
12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:23.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:23.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:23:23.930 00:23:23.930 --- 10.0.0.2 ping statistics --- 00:23:23.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.930 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:23.930 00:23:23.930 --- 10.0.0.1 ping statistics --- 00:23:23.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.930 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.930 
12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:23.930 12:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:24.878 Waiting for block devices as requested 00:23:24.878 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:25.137 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:25.137 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:25.137 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:25.396 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:25.396 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:25.396 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:25.396 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:25.654 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:25.654 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:25.654 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:25.654 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:25.654 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:25.912 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:25.912 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:25.912 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:25.912 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:26.174 12:00:15 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:26.174 No valid GPT data, bailing 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:26.174 00:23:26.174 Discovery Log Number of Records 2, Generation counter 2 00:23:26.174 =====Discovery Log Entry 0====== 00:23:26.174 trtype: tcp 00:23:26.174 adrfam: ipv4 00:23:26.174 subtype: current discovery subsystem 00:23:26.174 treq: not specified, sq flow control disable supported 00:23:26.174 portid: 1 00:23:26.174 trsvcid: 4420 00:23:26.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:26.174 traddr: 10.0.0.1 00:23:26.174 eflags: none 00:23:26.174 sectype: none 00:23:26.174 =====Discovery Log Entry 1====== 
00:23:26.174 trtype: tcp 00:23:26.174 adrfam: ipv4 00:23:26.174 subtype: nvme subsystem 00:23:26.174 treq: not specified, sq flow control disable supported 00:23:26.174 portid: 1 00:23:26.174 trsvcid: 4420 00:23:26.174 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:26.174 traddr: 10.0.0.1 00:23:26.174 eflags: none 00:23:26.174 sectype: none 00:23:26.174 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:26.174 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:26.174 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.432 ===================================================== 00:23:26.432 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:26.432 ===================================================== 00:23:26.432 Controller Capabilities/Features 00:23:26.432 ================================ 00:23:26.432 Vendor ID: 0000 00:23:26.432 Subsystem Vendor ID: 0000 00:23:26.432 Serial Number: 6470746286aa7877aeba 00:23:26.432 Model Number: Linux 00:23:26.432 Firmware Version: 6.7.0-68 00:23:26.432 Recommended Arb Burst: 0 00:23:26.432 IEEE OUI Identifier: 00 00 00 00:23:26.432 Multi-path I/O 00:23:26.432 May have multiple subsystem ports: No 00:23:26.432 May have multiple controllers: No 00:23:26.432 Associated with SR-IOV VF: No 00:23:26.432 Max Data Transfer Size: Unlimited 00:23:26.432 Max Number of Namespaces: 0 00:23:26.432 Max Number of I/O Queues: 1024 00:23:26.432 NVMe Specification Version (VS): 1.3 00:23:26.432 NVMe Specification Version (Identify): 1.3 00:23:26.432 Maximum Queue Entries: 1024 00:23:26.432 Contiguous Queues Required: No 00:23:26.433 Arbitration Mechanisms Supported 00:23:26.433 Weighted Round Robin: Not Supported 00:23:26.433 Vendor Specific: Not Supported 00:23:26.433 Reset Timeout: 7500 ms 00:23:26.433 Doorbell Stride: 4 bytes 00:23:26.433 NVM Subsystem Reset: Not Supported 00:23:26.433 Command Sets Supported 00:23:26.433 NVM Command Set: Supported 00:23:26.433 Boot Partition: Not Supported 00:23:26.433 Memory Page Size Minimum: 4096 bytes 00:23:26.433 Memory Page Size Maximum: 4096 bytes 00:23:26.433 Persistent Memory Region: Not Supported 00:23:26.433 Optional Asynchronous Events Supported 00:23:26.433 Namespace Attribute Notices: Not Supported 00:23:26.433 Firmware Activation Notices: Not Supported 00:23:26.433 ANA Change Notices: Not Supported 00:23:26.433 PLE Aggregate Log Change Notices: Not Supported 00:23:26.433 LBA Status Info Alert Notices: Not Supported 00:23:26.433 EGE Aggregate Log Change Notices: Not Supported 00:23:26.433 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.433 Zone Descriptor Change Notices: Not Supported 00:23:26.433 Discovery Log Change Notices: Supported 00:23:26.433 Controller Attributes 00:23:26.433 128-bit Host Identifier: Not Supported 00:23:26.433 Non-Operational Permissive Mode: Not Supported 00:23:26.433 NVM Sets: Not Supported 00:23:26.433 Read Recovery Levels: Not Supported 00:23:26.433 Endurance Groups: Not Supported 00:23:26.433 Predictable Latency Mode: Not Supported 00:23:26.433 Traffic Based Keep ALive: Not Supported 00:23:26.433 Namespace Granularity: Not Supported 00:23:26.433 SQ Associations: Not Supported 00:23:26.433 UUID List: Not Supported 00:23:26.433 Multi-Domain Subsystem: Not Supported 00:23:26.433 Fixed Capacity Management: Not Supported 00:23:26.433 Variable Capacity Management: Not 
Supported 00:23:26.433 Delete Endurance Group: Not Supported 00:23:26.433 Delete NVM Set: Not Supported 00:23:26.433 Extended LBA Formats Supported: Not Supported 00:23:26.433 Flexible Data Placement Supported: Not Supported 00:23:26.433 00:23:26.433 Controller Memory Buffer Support 00:23:26.433 ================================ 00:23:26.433 Supported: No 00:23:26.433 00:23:26.433 Persistent Memory Region Support 00:23:26.433 ================================ 00:23:26.433 Supported: No 00:23:26.433 00:23:26.433 Admin Command Set Attributes 00:23:26.433 ============================ 00:23:26.433 Security Send/Receive: Not Supported 00:23:26.433 Format NVM: Not Supported 00:23:26.433 Firmware Activate/Download: Not Supported 00:23:26.433 Namespace Management: Not Supported 00:23:26.433 Device Self-Test: Not Supported 00:23:26.433 Directives: Not Supported 00:23:26.433 NVMe-MI: Not Supported 00:23:26.433 Virtualization Management: Not Supported 00:23:26.433 Doorbell Buffer Config: Not Supported 00:23:26.433 Get LBA Status Capability: Not Supported 00:23:26.433 Command & Feature Lockdown Capability: Not Supported 00:23:26.433 Abort Command Limit: 1 00:23:26.433 Async Event Request Limit: 1 00:23:26.433 Number of Firmware Slots: N/A 00:23:26.433 Firmware Slot 1 Read-Only: N/A 00:23:26.433 Firmware Activation Without Reset: N/A 00:23:26.433 Multiple Update Detection Support: N/A 00:23:26.433 Firmware Update Granularity: No Information Provided 00:23:26.433 Per-Namespace SMART Log: No 00:23:26.433 Asymmetric Namespace Access Log Page: Not Supported 00:23:26.433 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:26.433 Command Effects Log Page: Not Supported 00:23:26.433 Get Log Page Extended Data: Supported 00:23:26.433 Telemetry Log Pages: Not Supported 00:23:26.433 Persistent Event Log Pages: Not Supported 00:23:26.433 Supported Log Pages Log Page: May Support 00:23:26.433 Commands Supported & Effects Log Page: Not Supported 00:23:26.433 Feature Identifiers & Effects Log Page:May Support 00:23:26.433 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.433 Data Area 4 for Telemetry Log: Not Supported 00:23:26.433 Error Log Page Entries Supported: 1 00:23:26.433 Keep Alive: Not Supported 00:23:26.433 00:23:26.433 NVM Command Set Attributes 00:23:26.433 ========================== 00:23:26.433 Submission Queue Entry Size 00:23:26.433 Max: 1 00:23:26.433 Min: 1 00:23:26.433 Completion Queue Entry Size 00:23:26.433 Max: 1 00:23:26.433 Min: 1 00:23:26.433 Number of Namespaces: 0 00:23:26.433 Compare Command: Not Supported 00:23:26.433 Write Uncorrectable Command: Not Supported 00:23:26.433 Dataset Management Command: Not Supported 00:23:26.433 Write Zeroes Command: Not Supported 00:23:26.433 Set Features Save Field: Not Supported 00:23:26.433 Reservations: Not Supported 00:23:26.433 Timestamp: Not Supported 00:23:26.433 Copy: Not Supported 00:23:26.433 Volatile Write Cache: Not Present 00:23:26.433 Atomic Write Unit (Normal): 1 00:23:26.433 Atomic Write Unit (PFail): 1 00:23:26.433 Atomic Compare & Write Unit: 1 00:23:26.433 Fused Compare & Write: Not Supported 00:23:26.433 Scatter-Gather List 00:23:26.433 SGL Command Set: Supported 00:23:26.433 SGL Keyed: Not Supported 00:23:26.433 SGL Bit Bucket Descriptor: Not Supported 00:23:26.433 SGL Metadata Pointer: Not Supported 00:23:26.433 Oversized SGL: Not Supported 00:23:26.433 SGL Metadata Address: Not Supported 00:23:26.433 SGL Offset: Supported 00:23:26.433 Transport SGL Data Block: Not Supported 00:23:26.433 Replay Protected Memory Block: 
Not Supported 00:23:26.433 00:23:26.433 Firmware Slot Information 00:23:26.433 ========================= 00:23:26.433 Active slot: 0 00:23:26.433 00:23:26.433 00:23:26.433 Error Log 00:23:26.433 ========= 00:23:26.433 00:23:26.433 Active Namespaces 00:23:26.433 ================= 00:23:26.433 Discovery Log Page 00:23:26.433 ================== 00:23:26.433 Generation Counter: 2 00:23:26.433 Number of Records: 2 00:23:26.433 Record Format: 0 00:23:26.433 00:23:26.433 Discovery Log Entry 0 00:23:26.433 ---------------------- 00:23:26.433 Transport Type: 3 (TCP) 00:23:26.433 Address Family: 1 (IPv4) 00:23:26.433 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:26.433 Entry Flags: 00:23:26.433 Duplicate Returned Information: 0 00:23:26.433 Explicit Persistent Connection Support for Discovery: 0 00:23:26.433 Transport Requirements: 00:23:26.433 Secure Channel: Not Specified 00:23:26.433 Port ID: 1 (0x0001) 00:23:26.433 Controller ID: 65535 (0xffff) 00:23:26.433 Admin Max SQ Size: 32 00:23:26.433 Transport Service Identifier: 4420 00:23:26.433 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:26.433 Transport Address: 10.0.0.1 00:23:26.433 Discovery Log Entry 1 00:23:26.433 ---------------------- 00:23:26.433 Transport Type: 3 (TCP) 00:23:26.433 Address Family: 1 (IPv4) 00:23:26.433 Subsystem Type: 2 (NVM Subsystem) 00:23:26.433 Entry Flags: 00:23:26.433 Duplicate Returned Information: 0 00:23:26.433 Explicit Persistent Connection Support for Discovery: 0 00:23:26.433 Transport Requirements: 00:23:26.433 Secure Channel: Not Specified 00:23:26.433 Port ID: 1 (0x0001) 00:23:26.433 Controller ID: 65535 (0xffff) 00:23:26.433 Admin Max SQ Size: 32 00:23:26.433 Transport Service Identifier: 4420 00:23:26.433 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:26.433 Transport Address: 10.0.0.1 00:23:26.433 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:26.433 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.433 get_feature(0x01) failed 00:23:26.433 get_feature(0x02) failed 00:23:26.433 get_feature(0x04) failed 00:23:26.433 ===================================================== 00:23:26.433 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:26.433 ===================================================== 00:23:26.433 Controller Capabilities/Features 00:23:26.433 ================================ 00:23:26.433 Vendor ID: 0000 00:23:26.433 Subsystem Vendor ID: 0000 00:23:26.433 Serial Number: 9155868ba2051844e948 00:23:26.433 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:26.433 Firmware Version: 6.7.0-68 00:23:26.433 Recommended Arb Burst: 6 00:23:26.433 IEEE OUI Identifier: 00 00 00 00:23:26.433 Multi-path I/O 00:23:26.433 May have multiple subsystem ports: Yes 00:23:26.433 May have multiple controllers: Yes 00:23:26.433 Associated with SR-IOV VF: No 00:23:26.433 Max Data Transfer Size: Unlimited 00:23:26.433 Max Number of Namespaces: 1024 00:23:26.433 Max Number of I/O Queues: 128 00:23:26.433 NVMe Specification Version (VS): 1.3 00:23:26.433 NVMe Specification Version (Identify): 1.3 00:23:26.433 Maximum Queue Entries: 1024 00:23:26.433 Contiguous Queues Required: No 00:23:26.433 Arbitration Mechanisms Supported 00:23:26.433 Weighted Round Robin: Not Supported 00:23:26.433 Vendor Specific: Not Supported 
00:23:26.433 Reset Timeout: 7500 ms 00:23:26.433 Doorbell Stride: 4 bytes 00:23:26.433 NVM Subsystem Reset: Not Supported 00:23:26.433 Command Sets Supported 00:23:26.433 NVM Command Set: Supported 00:23:26.433 Boot Partition: Not Supported 00:23:26.433 Memory Page Size Minimum: 4096 bytes 00:23:26.433 Memory Page Size Maximum: 4096 bytes 00:23:26.433 Persistent Memory Region: Not Supported 00:23:26.433 Optional Asynchronous Events Supported 00:23:26.433 Namespace Attribute Notices: Supported 00:23:26.433 Firmware Activation Notices: Not Supported 00:23:26.434 ANA Change Notices: Supported 00:23:26.434 PLE Aggregate Log Change Notices: Not Supported 00:23:26.434 LBA Status Info Alert Notices: Not Supported 00:23:26.434 EGE Aggregate Log Change Notices: Not Supported 00:23:26.434 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.434 Zone Descriptor Change Notices: Not Supported 00:23:26.434 Discovery Log Change Notices: Not Supported 00:23:26.434 Controller Attributes 00:23:26.434 128-bit Host Identifier: Supported 00:23:26.434 Non-Operational Permissive Mode: Not Supported 00:23:26.434 NVM Sets: Not Supported 00:23:26.434 Read Recovery Levels: Not Supported 00:23:26.434 Endurance Groups: Not Supported 00:23:26.434 Predictable Latency Mode: Not Supported 00:23:26.434 Traffic Based Keep ALive: Supported 00:23:26.434 Namespace Granularity: Not Supported 00:23:26.434 SQ Associations: Not Supported 00:23:26.434 UUID List: Not Supported 00:23:26.434 Multi-Domain Subsystem: Not Supported 00:23:26.434 Fixed Capacity Management: Not Supported 00:23:26.434 Variable Capacity Management: Not Supported 00:23:26.434 Delete Endurance Group: Not Supported 00:23:26.434 Delete NVM Set: Not Supported 00:23:26.434 Extended LBA Formats Supported: Not Supported 00:23:26.434 Flexible Data Placement Supported: Not Supported 00:23:26.434 00:23:26.434 Controller Memory Buffer Support 00:23:26.434 ================================ 00:23:26.434 Supported: No 00:23:26.434 00:23:26.434 Persistent Memory Region Support 00:23:26.434 ================================ 00:23:26.434 Supported: No 00:23:26.434 00:23:26.434 Admin Command Set Attributes 00:23:26.434 ============================ 00:23:26.434 Security Send/Receive: Not Supported 00:23:26.434 Format NVM: Not Supported 00:23:26.434 Firmware Activate/Download: Not Supported 00:23:26.434 Namespace Management: Not Supported 00:23:26.434 Device Self-Test: Not Supported 00:23:26.434 Directives: Not Supported 00:23:26.434 NVMe-MI: Not Supported 00:23:26.434 Virtualization Management: Not Supported 00:23:26.434 Doorbell Buffer Config: Not Supported 00:23:26.434 Get LBA Status Capability: Not Supported 00:23:26.434 Command & Feature Lockdown Capability: Not Supported 00:23:26.434 Abort Command Limit: 4 00:23:26.434 Async Event Request Limit: 4 00:23:26.434 Number of Firmware Slots: N/A 00:23:26.434 Firmware Slot 1 Read-Only: N/A 00:23:26.434 Firmware Activation Without Reset: N/A 00:23:26.434 Multiple Update Detection Support: N/A 00:23:26.434 Firmware Update Granularity: No Information Provided 00:23:26.434 Per-Namespace SMART Log: Yes 00:23:26.434 Asymmetric Namespace Access Log Page: Supported 00:23:26.434 ANA Transition Time : 10 sec 00:23:26.434 00:23:26.434 Asymmetric Namespace Access Capabilities 00:23:26.434 ANA Optimized State : Supported 00:23:26.434 ANA Non-Optimized State : Supported 00:23:26.434 ANA Inaccessible State : Supported 00:23:26.434 ANA Persistent Loss State : Supported 00:23:26.434 ANA Change State : Supported 00:23:26.434 ANAGRPID is not 
changed : No 00:23:26.434 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:26.434 00:23:26.434 ANA Group Identifier Maximum : 128 00:23:26.434 Number of ANA Group Identifiers : 128 00:23:26.434 Max Number of Allowed Namespaces : 1024 00:23:26.434 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:26.434 Command Effects Log Page: Supported 00:23:26.434 Get Log Page Extended Data: Supported 00:23:26.434 Telemetry Log Pages: Not Supported 00:23:26.434 Persistent Event Log Pages: Not Supported 00:23:26.434 Supported Log Pages Log Page: May Support 00:23:26.434 Commands Supported & Effects Log Page: Not Supported 00:23:26.434 Feature Identifiers & Effects Log Page:May Support 00:23:26.434 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.434 Data Area 4 for Telemetry Log: Not Supported 00:23:26.434 Error Log Page Entries Supported: 128 00:23:26.434 Keep Alive: Supported 00:23:26.434 Keep Alive Granularity: 1000 ms 00:23:26.434 00:23:26.434 NVM Command Set Attributes 00:23:26.434 ========================== 00:23:26.434 Submission Queue Entry Size 00:23:26.434 Max: 64 00:23:26.434 Min: 64 00:23:26.434 Completion Queue Entry Size 00:23:26.434 Max: 16 00:23:26.434 Min: 16 00:23:26.434 Number of Namespaces: 1024 00:23:26.434 Compare Command: Not Supported 00:23:26.434 Write Uncorrectable Command: Not Supported 00:23:26.434 Dataset Management Command: Supported 00:23:26.434 Write Zeroes Command: Supported 00:23:26.434 Set Features Save Field: Not Supported 00:23:26.434 Reservations: Not Supported 00:23:26.434 Timestamp: Not Supported 00:23:26.434 Copy: Not Supported 00:23:26.434 Volatile Write Cache: Present 00:23:26.434 Atomic Write Unit (Normal): 1 00:23:26.434 Atomic Write Unit (PFail): 1 00:23:26.434 Atomic Compare & Write Unit: 1 00:23:26.434 Fused Compare & Write: Not Supported 00:23:26.434 Scatter-Gather List 00:23:26.434 SGL Command Set: Supported 00:23:26.434 SGL Keyed: Not Supported 00:23:26.434 SGL Bit Bucket Descriptor: Not Supported 00:23:26.434 SGL Metadata Pointer: Not Supported 00:23:26.434 Oversized SGL: Not Supported 00:23:26.434 SGL Metadata Address: Not Supported 00:23:26.434 SGL Offset: Supported 00:23:26.434 Transport SGL Data Block: Not Supported 00:23:26.434 Replay Protected Memory Block: Not Supported 00:23:26.434 00:23:26.434 Firmware Slot Information 00:23:26.434 ========================= 00:23:26.434 Active slot: 0 00:23:26.434 00:23:26.434 Asymmetric Namespace Access 00:23:26.434 =========================== 00:23:26.434 Change Count : 0 00:23:26.434 Number of ANA Group Descriptors : 1 00:23:26.434 ANA Group Descriptor : 0 00:23:26.434 ANA Group ID : 1 00:23:26.434 Number of NSID Values : 1 00:23:26.434 Change Count : 0 00:23:26.434 ANA State : 1 00:23:26.434 Namespace Identifier : 1 00:23:26.434 00:23:26.434 Commands Supported and Effects 00:23:26.434 ============================== 00:23:26.434 Admin Commands 00:23:26.434 -------------- 00:23:26.434 Get Log Page (02h): Supported 00:23:26.434 Identify (06h): Supported 00:23:26.434 Abort (08h): Supported 00:23:26.434 Set Features (09h): Supported 00:23:26.434 Get Features (0Ah): Supported 00:23:26.434 Asynchronous Event Request (0Ch): Supported 00:23:26.434 Keep Alive (18h): Supported 00:23:26.434 I/O Commands 00:23:26.434 ------------ 00:23:26.434 Flush (00h): Supported 00:23:26.434 Write (01h): Supported LBA-Change 00:23:26.434 Read (02h): Supported 00:23:26.434 Write Zeroes (08h): Supported LBA-Change 00:23:26.434 Dataset Management (09h): Supported 00:23:26.434 00:23:26.434 Error Log 00:23:26.434 ========= 
00:23:26.434 Entry: 0 00:23:26.434 Error Count: 0x3 00:23:26.434 Submission Queue Id: 0x0 00:23:26.434 Command Id: 0x5 00:23:26.434 Phase Bit: 0 00:23:26.434 Status Code: 0x2 00:23:26.434 Status Code Type: 0x0 00:23:26.434 Do Not Retry: 1 00:23:26.434 Error Location: 0x28 00:23:26.434 LBA: 0x0 00:23:26.434 Namespace: 0x0 00:23:26.434 Vendor Log Page: 0x0 00:23:26.434 ----------- 00:23:26.434 Entry: 1 00:23:26.434 Error Count: 0x2 00:23:26.434 Submission Queue Id: 0x0 00:23:26.434 Command Id: 0x5 00:23:26.434 Phase Bit: 0 00:23:26.434 Status Code: 0x2 00:23:26.434 Status Code Type: 0x0 00:23:26.434 Do Not Retry: 1 00:23:26.434 Error Location: 0x28 00:23:26.434 LBA: 0x0 00:23:26.434 Namespace: 0x0 00:23:26.434 Vendor Log Page: 0x0 00:23:26.434 ----------- 00:23:26.434 Entry: 2 00:23:26.434 Error Count: 0x1 00:23:26.434 Submission Queue Id: 0x0 00:23:26.434 Command Id: 0x4 00:23:26.434 Phase Bit: 0 00:23:26.434 Status Code: 0x2 00:23:26.434 Status Code Type: 0x0 00:23:26.434 Do Not Retry: 1 00:23:26.434 Error Location: 0x28 00:23:26.434 LBA: 0x0 00:23:26.434 Namespace: 0x0 00:23:26.434 Vendor Log Page: 0x0 00:23:26.434 00:23:26.434 Number of Queues 00:23:26.434 ================ 00:23:26.434 Number of I/O Submission Queues: 128 00:23:26.434 Number of I/O Completion Queues: 128 00:23:26.434 00:23:26.434 ZNS Specific Controller Data 00:23:26.434 ============================ 00:23:26.434 Zone Append Size Limit: 0 00:23:26.434 00:23:26.434 00:23:26.434 Active Namespaces 00:23:26.434 ================= 00:23:26.434 get_feature(0x05) failed 00:23:26.434 Namespace ID:1 00:23:26.434 Command Set Identifier: NVM (00h) 00:23:26.434 Deallocate: Supported 00:23:26.434 Deallocated/Unwritten Error: Not Supported 00:23:26.434 Deallocated Read Value: Unknown 00:23:26.434 Deallocate in Write Zeroes: Not Supported 00:23:26.434 Deallocated Guard Field: 0xFFFF 00:23:26.434 Flush: Supported 00:23:26.434 Reservation: Not Supported 00:23:26.434 Namespace Sharing Capabilities: Multiple Controllers 00:23:26.434 Size (in LBAs): 1953525168 (931GiB) 00:23:26.434 Capacity (in LBAs): 1953525168 (931GiB) 00:23:26.435 Utilization (in LBAs): 1953525168 (931GiB) 00:23:26.435 UUID: e0f95261-4eb0-419e-a176-f4aca1277691 00:23:26.435 Thin Provisioning: Not Supported 00:23:26.435 Per-NS Atomic Units: Yes 00:23:26.435 Atomic Boundary Size (Normal): 0 00:23:26.435 Atomic Boundary Size (PFail): 0 00:23:26.435 Atomic Boundary Offset: 0 00:23:26.435 NGUID/EUI64 Never Reused: No 00:23:26.435 ANA group ID: 1 00:23:26.435 Namespace Write Protected: No 00:23:26.435 Number of LBA Formats: 1 00:23:26.435 Current LBA Format: LBA Format #00 00:23:26.435 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:26.435 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.435 rmmod nvme_tcp 00:23:26.435 rmmod nvme_fabrics 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.435 12:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:28.964 12:00:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:29.530 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:29.530 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:29.530 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:29.788 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:29.788 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:29.788 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:29.788 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:29.788 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:23:29.788 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:29.788 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:30.731 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:30.731 00:23:30.731 real 0m9.152s 00:23:30.731 user 0m1.985s 00:23:30.731 sys 0m3.227s 00:23:30.731 12:00:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:30.731 12:00:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.731 ************************************ 00:23:30.731 END TEST nvmf_identify_kernel_target 00:23:30.731 ************************************ 00:23:30.732 12:00:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:30.732 12:00:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:30.732 12:00:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:30.732 12:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.994 ************************************ 00:23:30.994 START TEST nvmf_auth_host 00:23:30.994 ************************************ 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:30.994 * Looking for test storage... 00:23:30.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
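For reference, the clean_kernel_target teardown traced just before this test removes the kernel nvmet configfs objects in reverse order of creation. A minimal standalone sketch of the same sequence follows; the redirect target of the echo 0 step is not visible in the xtrace and is assumed here to be the namespace enable attribute.

# tear down the kernel NVMe-oF target used by the previous test
# (nqn.2016-06.io.spdk:testnqn exported through port 1)
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"                         # assumption: disable the namespace first
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink the subsystem from port 1
rmdir "$subsys/namespaces/1"                                   # then remove namespace, port and subsystem dirs
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                    # finally unload the kernel target modules
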
00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.994 12:00:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.995 12:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:32.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:32.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:32.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.894 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:32.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.895 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:33.221 00:23:33.221 --- 10.0.0.2 ping statistics --- 00:23:33.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.221 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:33.221 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:33.222 00:23:33.222 --- 10.0.0.1 ping statistics --- 00:23:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.222 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1011791 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 1011791 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1011791 ']' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:33.222 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9fcff0b64dd2cc1a4b57f4fd3cdf9f6 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kuD 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9fcff0b64dd2cc1a4b57f4fd3cdf9f6 0 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9fcff0b64dd2cc1a4b57f4fd3cdf9f6 0 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9fcff0b64dd2cc1a4b57f4fd3cdf9f6 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kuD 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kuD 00:23:33.507 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kuD 
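Each secret above is produced by the gen_dhchap_key / format_dhchap_key pair: random bytes from /dev/urandom hex-encoded with xxd, wrapped into a DHHC-1 text secret by the inline python step, and written to a mode-0600 temp file. A condensed sketch of one round follows; the wrapping itself is not spelled out in the trace, so the base64-of-secret-plus-little-endian-CRC-32 encoding below is an assumption based on the standard DH-HMAC-CHAP secret representation, with the hex string used verbatim as the secret bytes.

# one round of key generation, mirroring gen_dhchap_key null 32 from the trace
digest=0                                # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters of secret material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])    # hex string treated as the secret bytes
crc = zlib.crc32(key).to_bytes(4, "little")             # assumed CRC-32 trailer per the DHHC-1 format
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
chmod 0600 "$file"                      # secrets stay readable only by the owner
echo "$file"
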
00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eff70a2270416610153f50585fb42dd521a79413be10e595a1035a283d5f0f38 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LhL 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eff70a2270416610153f50585fb42dd521a79413be10e595a1035a283d5f0f38 3 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eff70a2270416610153f50585fb42dd521a79413be10e595a1035a283d5f0f38 3 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eff70a2270416610153f50585fb42dd521a79413be10e595a1035a283d5f0f38 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LhL 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LhL 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LhL 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:33.508 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:33.765 12:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6bb9b1b85b81c792c43a6cdbf54b097ac3dd488827090e30 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.M7X 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6bb9b1b85b81c792c43a6cdbf54b097ac3dd488827090e30 0 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6bb9b1b85b81c792c43a6cdbf54b097ac3dd488827090e30 0 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6bb9b1b85b81c792c43a6cdbf54b097ac3dd488827090e30 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.M7X 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.M7X 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.M7X 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d63710ae0594beb43d37f6ffb3752ac2f88700f17f6f54ba 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Dk8 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d63710ae0594beb43d37f6ffb3752ac2f88700f17f6f54ba 2 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d63710ae0594beb43d37f6ffb3752ac2f88700f17f6f54ba 2 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d63710ae0594beb43d37f6ffb3752ac2f88700f17f6f54ba 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Dk8 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Dk8 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Dk8 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e0d2d4a4ebc1638866bd152c44d69c10 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WHh 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e0d2d4a4ebc1638866bd152c44d69c10 1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e0d2d4a4ebc1638866bd152c44d69c10 1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e0d2d4a4ebc1638866bd152c44d69c10 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WHh 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WHh 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WHh 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.765 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2377baa90970f85fa0227984f7855f69 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bBM 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2377baa90970f85fa0227984f7855f69 1 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2377baa90970f85fa0227984f7855f69 1 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2377baa90970f85fa0227984f7855f69 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bBM 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bBM 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bBM 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:33.766 12:00:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83b5495962f48028aedc4dff3bcce97fa3b14b29597b81c7 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IgK 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83b5495962f48028aedc4dff3bcce97fa3b14b29597b81c7 2 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83b5495962f48028aedc4dff3bcce97fa3b14b29597b81c7 2 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83b5495962f48028aedc4dff3bcce97fa3b14b29597b81c7 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:33.766 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IgK 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IgK 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IgK 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6e0dc9fbb280a8d8309a96ff5c614d9 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xA3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6e0dc9fbb280a8d8309a96ff5c614d9 0 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6e0dc9fbb280a8d8309a96ff5c614d9 0 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a6e0dc9fbb280a8d8309a96ff5c614d9 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:34.024 12:00:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xA3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xA3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xA3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33fb3ad055a4d0d443ba1ac45cf44c7b15c5d1c3a5ef29ef6dd7a61f2917eaba 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9pR 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33fb3ad055a4d0d443ba1ac45cf44c7b15c5d1c3a5ef29ef6dd7a61f2917eaba 3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33fb3ad055a4d0d443ba1ac45cf44c7b15c5d1c3a5ef29ef6dd7a61f2917eaba 3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33fb3ad055a4d0d443ba1ac45cf44c7b15c5d1c3a5ef29ef6dd7a61f2917eaba 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9pR 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9pR 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9pR 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1011791 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1011791 ']' 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
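The waitforlisten call above blocks until the freshly started nvmf_tgt answers on its RPC socket. A rough standalone equivalent is sketched below; it assumes the default /var/tmp/spdk.sock socket and this workspace's rpc.py path, and is only an approximation of the harness helper.

# poll the target's RPC socket until it responds, bailing out if the pid dies
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=1011791
for _ in $(seq 1 100); do
    kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.1
done
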
00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:34.024 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kuD 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LhL ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LhL 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.M7X 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Dk8 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Dk8 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WHh 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bBM ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bBM 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:34.282 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IgK 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xA3 ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xA3 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9pR 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
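The trace that follows stands up a kernel nvmet subsystem over configfs and exports the local NVMe namespace at 10.0.0.1:4420. xtrace shows the echo commands but not their redirect targets, so the condensed view below fills those in from the conventional nvmet configfs layout; the attribute names are assumptions, and the two bare "echo 1" writes are read as the allow-any-host and namespace-enable toggles:

# condensed sketch of the configfs steps in the trace below (redirect targets assumed)
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # assumed target
echo 1 > "$subsys/attr_allow_any_host"                          # assumed target
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

auth.sh then tightens this by creating a host entry and linking it into the subsystem's allowed_hosts, which is why the discovery output further down lists the subsystem from the initiator side.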
00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:34.283 12:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:35.675 Waiting for block devices as requested 00:23:35.675 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:35.675 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:35.675 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:35.933 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:35.933 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:35.933 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:35.933 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:36.191 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:36.191 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:36.191 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:36.448 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:36.448 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:36.448 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:36.448 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:36.706 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:36.706 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:36.706 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:37.270 No valid GPT data, bailing 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:37.270 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:37.270 00:23:37.270 Discovery Log Number of Records 2, Generation counter 2 00:23:37.270 =====Discovery Log Entry 0====== 00:23:37.270 trtype: tcp 00:23:37.270 adrfam: ipv4 00:23:37.271 subtype: current discovery subsystem 00:23:37.271 treq: not specified, sq flow control disable supported 00:23:37.271 portid: 1 00:23:37.271 trsvcid: 4420 00:23:37.271 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:37.271 traddr: 10.0.0.1 00:23:37.271 eflags: none 00:23:37.271 sectype: none 00:23:37.271 =====Discovery Log Entry 1====== 00:23:37.271 trtype: tcp 00:23:37.271 adrfam: ipv4 00:23:37.271 subtype: nvme subsystem 00:23:37.271 treq: not specified, sq flow control disable supported 00:23:37.271 portid: 1 00:23:37.271 trsvcid: 4420 00:23:37.271 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:37.271 traddr: 10.0.0.1 00:23:37.271 eflags: none 00:23:37.271 sectype: none 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 
]] 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.271 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.528 nvme0n1 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.528 
12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.528 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.529 
12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.529 nvme0n1 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.529 12:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.529 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.786 12:00:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.786 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.787 nvme0n1 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
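Each connect_authenticate pass in this trace follows the same RPC pattern: restrict the initiator's DH-HMAC-CHAP digests and DH groups, attach a controller to the kernel target using the keyring names registered earlier, confirm that nvme0 (and its nvme0n1 namespace bdev) shows up, then detach. Condensed into the underlying calls, assuming rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock:

# one iteration, matching the sha256/ffdhe2048/keyid=1 pass traced above
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
$rpc bdev_nvme_detach_controller nvme0

A failed handshake would leave bdev_nvme_get_controllers empty, which is what the [[ nvme0 == nvme0 ]] comparison on the jq output is guarding against.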
00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.787 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.044 nvme0n1 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.044 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:38.045 12:00:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.045 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 nvme0n1 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.302 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 nvme0n1 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.560 12:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 nvme0n1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.819 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.077 nvme0n1 00:23:39.077 
12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.078 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.336 nvme0n1 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
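Before each of these attach attempts, nvmet_auth_set_key mirrors the chosen parameters on the target side by writing into the kernel host entry created earlier under /sys/kernel/config/nvmet/hosts. The redirect targets are again hidden by xtrace; assuming the kernel nvmet auth attribute names, the writes amount to roughly:

# hedged view of one nvmet_auth_set_key call (attribute names assumed)
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe3072      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"        # DHHC-1:... host secret for this keyid
echo "$ckey"        > "$host/dhchap_ctrl_key"   # controller secret, only when a ckey is set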
00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.336 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 nvme0n1 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.594 
12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.594 12:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.594 12:00:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.594 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.853 nvme0n1 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:39.853 12:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.853 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.111 nvme0n1 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.111 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.369 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.370 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.370 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.370 12:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.370 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.370 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.628 nvme0n1 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.628 12:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.628 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.629 12:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.887 nvme0n1 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
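On the target side, each nvmet_auth_set_key call logged here appears to write the negotiated hash name, DH group, and DHHC-1 secrets for the selected keyid into the kernel nvmet configuration for the host NQN. A rough equivalent for the keyid 2 / ffdhe4096 case above is sketched below; the configfs host path is an assumption based on the standard Linux nvmet layout and is not shown in this excerpt, while the secrets are the ones echoed in the log.

  # Hypothetical nvmet-side key provisioning for keyid 2 (sha256 / ffdhe4096); requires root
  # and an existing nvmet host entry. The path below is assumed, not taken from this log.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo 'ffdhe4096'    > "$host/dhchap_dhgroup"
  echo 'DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG:' > "$host/dhchap_key"
  # The controller (bidirectional) key is only set when a ckey exists for this keyid.
  echo 'DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg:' > "$host/dhchap_ctrl_key"

The host-side attach for this iteration, shown just above, then passes --dhchap-key key2 --dhchap-ctrlr-key ckey2 so that both directions of the DH-HMAC-CHAP exchange can be verified.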
00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.887 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.145 nvme0n1 00:23:41.145 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.402 12:00:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.402 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.403 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 nvme0n1 00:23:41.662 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.662 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.662 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.662 12:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 12:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:41.662 12:00:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.662 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.227 nvme0n1 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.227 
12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:42.227 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.228 12:00:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.228 12:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.792 nvme0n1 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.792 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.049 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.049 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.049 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.050 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.613 nvme0n1 00:23:43.613 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.613 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.613 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.614 
12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.614 12:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.178 nvme0n1 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.178 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.179 12:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.744 nvme0n1 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.744 12:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.676 nvme0n1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.676 12:00:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.676 12:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.048 nvme0n1 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.048 12:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 nvme0n1 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.989 
12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
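The repeated host/auth.sh@42-51 lines above are nvmet_auth_set_key provisioning the kernel target side before each connection attempt. A minimal sketch of what that helper appears to do, reconstructed only from the values echoed in this trace; the configfs destination paths are not visible in the log and are assumptions:

  # Sketch reconstructed from the xtrace output above. The /sys/kernel/config/nvmet
  # paths are assumptions -- only the echoed values appear in this log.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

      echo "hmac($digest)" > "$host_dir/dhchap_hash"        # e.g. hmac(sha256)
      echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"     # e.g. ffdhe8192
      echo "$key"          > "$host_dir/dhchap_key"         # DHHC-1:0X:... host key
      [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # bidirectional auth only
  }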
00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.989 12:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.918 nvme0n1 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.918 
12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.918 12:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.858 nvme0n1 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.858 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 nvme0n1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
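Every key index in this run goes through the same connect_authenticate sequence visible in the rpc_cmd lines: restrict the host to one DH-HMAC-CHAP digest and DH group, resolve the initiator address, attach a controller with the matching DHHC-1 secrets, confirm it enumerates as nvme0, then detach it again. Condensed into one function, with helper and array names taken from the trace (anything beyond that is an assumption):

  # Condensed from the rpc_cmd invocations in the trace above.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pass a controller key only when one exists for this key index.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # The new controller must show up as nvme0 before it is torn down again.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }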
00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 nvme0n1 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.115 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 nvme0n1 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:50.384 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.385 12:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.648 nvme0n1 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.648 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.649 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.905 nvme0n1 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.905 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
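The nvmf/common.sh@741-755 lines that precede every attach are get_main_ns_ip picking the address to dial; for the tcp transport it indirectly expands NVMF_INITIATOR_IP, which resolves to 10.0.0.1 throughout this run. A sketch of that selection logic as it reads from the trace (the transport variable name is not shown here and is an assumption):

  # Reconstructed from the nvmf/common.sh xtrace lines above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                # "tcp" in this run; variable name assumed
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                         # indirect lookup, 10.0.0.1 here
      echo "${!ip}"
  }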
00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.906 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.163 nvme0n1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
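The host/auth.sh@100-104 markers that recur through this part of the log are the outer test loop: every digest is crossed with every DH group and every key index, and each combination re-provisions the target key before reconnecting. Roughly, under the same assumptions as the sketches above:

  # Loop structure implied by the host/auth.sh@100-104 trace markers.
  for digest in "${digests[@]}"; do              # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 through ffdhe8192
          for keyid in "${!keys[@]}"; do         # key indices 0-4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done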
00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.163 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 nvme0n1 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.420 12:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.676 nvme0n1 00:23:51.676 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.676 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.676 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.676 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.677 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.934 nvme0n1 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.934 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.191 nvme0n1 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.191 12:00:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.191 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.192 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.449 nvme0n1 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.449 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.707 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.708 12:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.708 12:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.708 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.708 12:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 nvme0n1 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.965 12:00:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.965 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 nvme0n1 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:53.223 12:00:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.223 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.480 nvme0n1 00:23:53.480 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.481 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.481 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.481 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.481 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.481 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.737 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.737 12:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.737 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.737 12:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:23:53.738 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.995 nvme0n1 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.995 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.996 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.557 nvme0n1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.557 12:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.122 nvme0n1 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.122 12:00:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.122 12:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.687 nvme0n1 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.687 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.263 nvme0n1 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.263 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.519 12:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.082 nvme0n1 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.082 12:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 nvme0n1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.014 12:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.948 nvme0n1 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.948 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.206 12:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.146 nvme0n1 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.146 12:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.147 12:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.147 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.147 12:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.078 nvme0n1 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.078 12:00:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.078 12:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.010 nvme0n1 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.010 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.268 nvme0n1 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.268 12:00:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.268 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.269 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.527 nvme0n1 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.527 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.528 12:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 nvme0n1 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.786 12:00:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.786 12:00:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.786 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.044 nvme0n1 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.044 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.301 nvme0n1 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:24:03.301 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.302 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.566 nvme0n1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.566 
12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.566 12:00:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.566 12:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.829 nvme0n1 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
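[editor note] The entries above trace one full pass of connect_authenticate for a key index with hmac(sha512) and ffdhe3072. A condensed sketch of that host-side cycle, reconstructed from the @60/@61/@64/@65 trace lines (rpc_cmd is the test suite's wrapper around SPDK's rpc.py; the address, NQNs and key names are the fixtures shown in this log, not values to reuse verbatim):

    # restrict the initiator to the digest/dhgroup under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # connect to the target with the host key (and controller key, when one is defined)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # the attach only succeeds if DH-HMAC-CHAP completed, so asserting that the
    # controller shows up is the actual authentication check
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # tear down before the next key/dhgroup combination
    rpc_cmd bdev_nvme_detach_controller nvme0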
00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.829 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.086 nvme0n1 00:24:04.086 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.086 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.086 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.086 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.086 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.087 12:00:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
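[editor note] get_main_ns_ip, traced just above (nvmf/common.sh@741-755), picks the address to dial by transport. A minimal sketch of that selection as it appears in the trace; TEST_TRANSPORT as the variable carrying the transport name is an assumption, the rest follows the logged [[ -z ... ]] checks:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                 # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # -> NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1                          # indirect: trace shows 10.0.0.1
        echo "${!ip}"                                        # trace: echo 10.0.0.1
    }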
00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.087 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.345 nvme0n1 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.345 
12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.345 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.604 nvme0n1 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.604 12:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.862 nvme0n1 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.862 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.863 12:00:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.863 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.121 nvme0n1 00:24:05.121 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.121 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.121 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.121 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.121 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
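[editor note] Each repetition of the pattern above is driven by the nested loops visible in the @101/@102/@103/@104 trace lines: the outer loop switches the FFDHE group (the log has just moved from ffdhe3072 to ffdhe4096), the inner loop walks every configured key index. A schematic of that driver; the full dhgroups list is an assumption (this excerpt only shows 3072/4096/6144), the digest is fixed at sha512 in this part of the log, and the real DHHC-1 secrets live in the keys/ckeys arrays set up earlier in the test:

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)     # groups observed in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # program the target side with this key/digest/dhgroup combination ...
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # ... then prove the host can authenticate against it
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done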
00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.383 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.384 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 nvme0n1 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.641 12:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.641 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.900 nvme0n1 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.900 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.159 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 nvme0n1 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.417 12:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.984 nvme0n1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.984 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.612 nvme0n1 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:24:07.612 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.613 12:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.189 nvme0n1 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.189 12:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.755 nvme0n1 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.755 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.756 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.320 nvme0n1 00:24:09.320 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.320 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.320 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.320 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.320 12:00:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.320 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmY2ZmMGI2NGRkMmNjMWE0YjU3ZjRmZDNjZGY5Zja7gsrA: 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: ]] 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZmNzBhMjI3MDQxNjYxMDE1M2Y1MDU4NWZiNDJkZDUyMWE3OTQxM2JlMTBlNTk1YTEwMzVhMjgzZDVmMGYzODFjLBQ=: 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:09.576 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.577 12:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.508 nvme0n1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.508 12:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.440 nvme0n1 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.440 12:01:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBkMmQ0YTRlYmMxNjM4ODY2YmQxNTJjNDRkNjljMTCTI2KG: 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM3N2JhYTkwOTcwZjg1ZmEwMjI3OTg0Zjc4NTVmNjl7RPEg: 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.440 12:01:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.811 nvme0n1 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNiNTQ5NTk2MmY0ODAyOGFlZGM0ZGZmM2JjY2U5N2ZhM2IxNGIyOTU5N2I4MWM36D+ruw==: 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZlMGRjOWZiYjI4MGE4ZDgzMDlhOTZmZjVjNjE0ZDnKTKaT: 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:12.811 12:01:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.811 12:01:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 nvme0n1 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.743 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzNmYjNhZDA1NWE0ZDBkNDQzYmExYWM0NWNmNDRjN2IxNWM1ZDFjM2E1ZWYyOWVmNmRkN2E2MWYyOTE3ZWFiYcOi7LM=: 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.744 12:01:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:24:13.744 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.678 nvme0n1 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJiOWIxYjg1YjgxYzc5MmM0M2E2Y2RiZjU0YjA5N2FjM2RkNDg4ODI3MDkwZTMwli/n8A==: 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDYzNzEwYWUwNTk0YmViNDNkMzdmNmZmYjM3NTJhYzJmODg3MDBmMTdmNmY1NGJhrEJ2rw==: 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.678 
12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.678 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.679 12:01:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.679 request: 00:24:14.679 { 00:24:14.679 "name": "nvme0", 00:24:14.679 "trtype": "tcp", 00:24:14.679 "traddr": "10.0.0.1", 00:24:14.679 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:14.679 "adrfam": "ipv4", 00:24:14.679 "trsvcid": "4420", 00:24:14.679 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:14.679 "method": "bdev_nvme_attach_controller", 00:24:14.679 "req_id": 1 00:24:14.679 } 00:24:14.679 Got JSON-RPC error response 00:24:14.679 response: 00:24:14.679 { 00:24:14.679 "code": -5, 00:24:14.679 "message": "Input/output error" 00:24:14.679 } 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:14.679 
12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.679 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.937 request: 00:24:14.937 { 00:24:14.937 "name": "nvme0", 00:24:14.937 "trtype": "tcp", 00:24:14.937 "traddr": "10.0.0.1", 00:24:14.937 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:14.937 "adrfam": "ipv4", 00:24:14.937 "trsvcid": "4420", 00:24:14.937 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:14.937 "dhchap_key": "key2", 00:24:14.937 "method": "bdev_nvme_attach_controller", 00:24:14.937 "req_id": 1 00:24:14.937 } 00:24:14.937 Got JSON-RPC error response 00:24:14.937 response: 00:24:14.937 { 00:24:14.937 "code": -5, 00:24:14.937 "message": "Input/output error" 00:24:14.937 } 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:14.937 
12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.937 request: 00:24:14.937 { 00:24:14.937 "name": "nvme0", 00:24:14.937 "trtype": "tcp", 00:24:14.937 "traddr": "10.0.0.1", 00:24:14.937 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:14.937 "adrfam": "ipv4", 00:24:14.937 "trsvcid": "4420", 00:24:14.937 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:14.937 "dhchap_key": "key1", 00:24:14.937 "dhchap_ctrlr_key": "ckey2", 00:24:14.937 "method": "bdev_nvme_attach_controller", 00:24:14.937 "req_id": 1 
00:24:14.937 } 00:24:14.937 Got JSON-RPC error response 00:24:14.937 response: 00:24:14.937 { 00:24:14.937 "code": -5, 00:24:14.937 "message": "Input/output error" 00:24:14.937 } 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.937 rmmod nvme_tcp 00:24:14.937 rmmod nvme_fabrics 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1011791 ']' 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1011791 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1011791 ']' 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1011791 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1011791 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1011791' 00:24:14.937 killing process with pid 1011791 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1011791 00:24:14.937 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1011791 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.196 12:01:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.196 12:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:17.728 12:01:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:18.661 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:18.661 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:18.661 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:19.597 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:19.597 12:01:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kuD /tmp/spdk.key-null.M7X /tmp/spdk.key-sha256.WHh /tmp/spdk.key-sha384.IgK /tmp/spdk.key-sha512.9pR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:19.597 12:01:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:20.978 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:20.978 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:20.978 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:24:20.978 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:20.978 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:20.978 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:20.978 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:20.978 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:20.978 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:20.978 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:20.978 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:20.978 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:20.978 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:20.978 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:20.978 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:20.978 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:20.978 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:20.978 00:24:20.978 real 0m50.042s 00:24:20.978 user 0m47.829s 00:24:20.978 sys 0m5.917s 00:24:20.978 12:01:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:20.978 12:01:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.978 ************************************ 00:24:20.978 END TEST nvmf_auth_host 00:24:20.978 ************************************ 00:24:20.978 12:01:10 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:24:20.979 12:01:10 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:20.979 12:01:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:20.979 12:01:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:20.979 12:01:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.979 ************************************ 00:24:20.979 START TEST nvmf_digest 00:24:20.979 ************************************ 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:20.979 * Looking for test storage... 
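The nvmf_auth_host passes traced above repeat one pattern for every digest/dhgroup/key combination: program the DH-HMAC-CHAP secret into the kernel nvmet host entry (the nvmet_auth_set_key echoes of 'hmac(sha512)', the dhgroup and the DHHC-1 strings), reconfigure the SPDK initiator with bdev_nvme_set_options, and attach a controller that presents the matching key. A minimal bash sketch of one such iteration follows; the configfs attribute names reflect the Linux nvmet in-band-auth layout and the key strings are placeholders, so treat those details as assumptions rather than values captured from this run.

  # One auth iteration, roughly as host/auth.sh drives it (sketch, not the script itself).
  digest=sha512
  dhgroup=ffdhe6144
  keyid=1
  host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # Target side: hand the kernel nvmet host the hash, DH group and secrets.
  echo "hmac($digest)" > "$host_cfs/dhchap_hash"        # attribute name assumed
  echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"     # attribute name assumed
  echo "DHHC-1:00:placeholder-host-secret:" > "$host_cfs/dhchap_key"       # placeholder secret
  echo "DHHC-1:00:placeholder-ctrl-secret:" > "$host_cfs/dhchap_ctrl_key"  # placeholder secret
  # Initiator side: rpc_cmd in the log is the test wrapper around scripts/rpc.py;
  # key1/ckey1 are key names the test registered earlier (not shown in this excerpt).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # A bdev_nvme_get_controllers / bdev_nvme_detach_controller pair then verifies the
  # connection and tears it down before the next combination, as the log shows.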
00:24:20.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.979 12:01:10 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.979 12:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:23.508 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.508 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:23.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:23.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:23.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:24:23.509 00:24:23.509 --- 10.0.0.2 ping statistics --- 00:24:23.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.509 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:24:23.509 00:24:23.509 --- 10.0.0.1 ping statistics --- 00:24:23.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.509 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 ************************************ 00:24:23.509 START TEST nvmf_digest_clean 00:24:23.509 ************************************ 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1021239 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1021239 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1021239 ']' 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.509 
12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:23.509 12:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 [2024-07-12 12:01:12.650335] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:23.509 [2024-07-12 12:01:12.650429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.509 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.509 [2024-07-12 12:01:12.720161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.509 [2024-07-12 12:01:12.839289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.509 [2024-07-12 12:01:12.839344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.509 [2024-07-12 12:01:12.839361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.509 [2024-07-12 12:01:12.839375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.509 [2024-07-12 12:01:12.839387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
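[annotation] The nvmfappstart step that produced the EAL/reactor notices above amounts to launching nvmf_tgt inside the test's network namespace and then waiting for its RPC socket. A minimal sketch of that sequence, reusing the paths and namespace name visible in this log; the polling loop is a simplified stand-in for waitforlisten and the use of rpc_get_methods as the readiness probe is an assumption, not the script's exact check:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Launch the target inside the namespace created earlier (cvl_0_0_ns_spdk),
    # RPC-gated and with full tracing, as nvmf/common.sh@480 shows above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll until the RPC socket answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done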
00:24:23.509 [2024-07-12 12:01:12.839428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:24.441 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.442 null0 00:24:24.442 [2024-07-12 12:01:13.712572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.442 [2024-07-12 12:01:13.736789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1021390 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1021390 /var/tmp/bperf.sock 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1021390 ']' 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:24.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.442 12:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.442 [2024-07-12 12:01:13.787407] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:24.442 [2024-07-12 12:01:13.787486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021390 ] 00:24:24.442 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.442 [2024-07-12 12:01:13.854822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.700 [2024-07-12 12:01:13.976099] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.265 12:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:25.265 12:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:24:25.265 12:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:25.265 12:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:25.265 12:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:25.831 12:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.831 12:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.088 nvme0n1 00:24:26.088 12:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:26.088 12:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:26.346 Running I/O for 2 seconds... 
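[annotation] Every run_bperf invocation in this log follows the same shape: start bdevperf on its own RPC socket, finish framework init, attach an NVMe-oF controller with TCP data digest enabled, then drive the I/O from bdevperf.py. A rough condensation of the commands traced above, with arguments copied from this randread 4096/128 run; the waitforlisten step between launching bdevperf and the first RPC is omitted here for brevity:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Start bdevperf on core 1, RPC-gated, for a 2-second randread run (4 KiB, QD 128).
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $RPC framework_start_init
    # Attach the target listening on 10.0.0.2:4420 with data digest (--ddgst) enabled.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the actual I/O; the latency table that follows in the log is its output.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests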
00:24:28.253 00:24:28.253 Latency(us) 00:24:28.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.253 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:28.253 nvme0n1 : 2.00 18248.28 71.28 0.00 0.00 7006.04 3519.53 16117.00 00:24:28.253 =================================================================================================================== 00:24:28.253 Total : 18248.28 71.28 0.00 0.00 7006.04 3519.53 16117.00 00:24:28.253 0 00:24:28.253 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:28.253 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:28.253 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:28.253 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:28.253 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:28.253 | select(.opcode=="crc32c") 00:24:28.253 | "\(.module_name) \(.executed)"' 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1021390 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1021390 ']' 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1021390 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1021390 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1021390' 00:24:28.566 killing process with pid 1021390 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1021390 00:24:28.566 Received shutdown signal, test time was about 2.000000 seconds 00:24:28.566 00:24:28.566 Latency(us) 00:24:28.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.566 =================================================================================================================== 00:24:28.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.566 12:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1021390 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:28.823 12:01:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1021933 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1021933 /var/tmp/bperf.sock 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1021933 ']' 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:28.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:28.823 12:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:28.823 [2024-07-12 12:01:18.262299] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:28.823 [2024-07-12 12:01:18.262395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021933 ] 00:24:28.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:28.823 Zero copy mechanism will not be used. 
00:24:28.823 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.080 [2024-07-12 12:01:18.325978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.080 [2024-07-12 12:01:18.443675] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.010 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:30.010 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:24:30.010 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:30.010 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:30.010 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:30.267 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.267 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.523 nvme0n1 00:24:30.523 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:30.523 12:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.780 Zero copy mechanism will not be used. 00:24:30.780 Running I/O for 2 seconds... 
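[annotation] After each run the test queries the accel framework statistics over the same bperf socket and confirms that crc32c digest work was actually executed, and by which module (software here, since no DSA initiator or target was requested). The check is the accel_get_stats call and jq filter seen after the first run; the sketch below shows how its output is consumed, with the JSON field names inferred from that filter and the variable names taken from digest.sh:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Keep only the crc32c entry from the accel statistics of the running bdevperf.
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # The clean variant expects the software module to have done real digest work.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c ran in software"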
00:24:32.676 00:24:32.676 Latency(us) 00:24:32.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.676 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:32.676 nvme0n1 : 2.00 5528.95 691.12 0.00 0.00 2889.30 758.52 11116.85 00:24:32.676 =================================================================================================================== 00:24:32.676 Total : 5528.95 691.12 0.00 0.00 2889.30 758.52 11116.85 00:24:32.676 0 00:24:32.676 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:32.676 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:32.676 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:32.676 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:32.676 | select(.opcode=="crc32c") 00:24:32.676 | "\(.module_name) \(.executed)"' 00:24:32.676 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1021933 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1021933 ']' 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1021933 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1021933 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1021933' 00:24:32.934 killing process with pid 1021933 00:24:32.934 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1021933 00:24:32.934 Received shutdown signal, test time was about 2.000000 seconds 00:24:32.934 00:24:32.934 Latency(us) 00:24:32.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.934 =================================================================================================================== 00:24:32.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.192 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1021933 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:33.449 12:01:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1022465 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1022465 /var/tmp/bperf.sock 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1022465 ']' 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:33.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:33.449 12:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.449 [2024-07-12 12:01:22.755690] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:24:33.449 [2024-07-12 12:01:22.755786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022465 ] 00:24:33.449 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.449 [2024-07-12 12:01:22.819428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.449 [2024-07-12 12:01:22.937496] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.380 12:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:34.380 12:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:24:34.380 12:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:34.380 12:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:34.380 12:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:34.636 12:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:34.636 12:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.200 nvme0n1 00:24:35.200 12:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:35.200 12:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.200 Running I/O for 2 seconds... 
00:24:37.727 00:24:37.727 Latency(us) 00:24:37.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.727 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:37.727 nvme0n1 : 2.01 18415.61 71.94 0.00 0.00 6933.04 6165.24 15049.01 00:24:37.727 =================================================================================================================== 00:24:37.727 Total : 18415.61 71.94 0.00 0.00 6933.04 6165.24 15049.01 00:24:37.727 0 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:37.727 | select(.opcode=="crc32c") 00:24:37.727 | "\(.module_name) \(.executed)"' 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1022465 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1022465 ']' 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1022465 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1022465 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1022465' 00:24:37.727 killing process with pid 1022465 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1022465 00:24:37.727 Received shutdown signal, test time was about 2.000000 seconds 00:24:37.727 00:24:37.727 Latency(us) 00:24:37.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.727 =================================================================================================================== 00:24:37.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.727 12:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1022465 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:37.727 12:01:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1023008 00:24:37.727 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1023008 /var/tmp/bperf.sock 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1023008 ']' 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:37.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:37.728 12:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:37.986 [2024-07-12 12:01:27.260664] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:37.986 [2024-07-12 12:01:27.260758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023008 ] 00:24:37.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:37.986 Zero copy mechanism will not be used. 
00:24:37.986 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.986 [2024-07-12 12:01:27.323595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.986 [2024-07-12 12:01:27.445789] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.919 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:38.919 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:24:38.919 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:38.919 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:38.919 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:39.177 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.177 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.742 nvme0n1 00:24:39.742 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:39.742 12:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:39.742 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:39.742 Zero copy mechanism will not be used. 00:24:39.742 Running I/O for 2 seconds... 
00:24:41.640 00:24:41.640 Latency(us) 00:24:41.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.640 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:41.640 nvme0n1 : 2.00 5356.45 669.56 0.00 0.00 2979.06 2330.17 12621.75 00:24:41.640 =================================================================================================================== 00:24:41.640 Total : 5356.45 669.56 0.00 0.00 2979.06 2330.17 12621.75 00:24:41.640 0 00:24:41.640 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:41.640 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:41.640 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:41.640 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:41.640 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:41.640 | select(.opcode=="crc32c") 00:24:41.640 | "\(.module_name) \(.executed)"' 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1023008 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1023008 ']' 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1023008 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1023008 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:41.897 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1023008' 00:24:41.897 killing process with pid 1023008 00:24:41.898 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1023008 00:24:41.898 Received shutdown signal, test time was about 2.000000 seconds 00:24:41.898 00:24:41.898 Latency(us) 00:24:41.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.898 =================================================================================================================== 00:24:41.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.898 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1023008 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1021239 00:24:42.463 12:01:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1021239 ']' 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1021239 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1021239 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1021239' 00:24:42.463 killing process with pid 1021239 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1021239 00:24:42.463 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1021239 00:24:42.721 00:24:42.721 real 0m19.369s 00:24:42.721 user 0m38.367s 00:24:42.721 sys 0m4.680s 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:42.721 ************************************ 00:24:42.721 END TEST nvmf_digest_clean 00:24:42.721 ************************************ 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:42.721 12:01:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:42.721 ************************************ 00:24:42.721 START TEST nvmf_digest_error 00:24:42.721 ************************************ 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1023582 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1023582 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1023582 ']' 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:42.721 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.721 [2024-07-12 12:01:32.075917] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:42.721 [2024-07-12 12:01:32.075999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.721 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.721 [2024-07-12 12:01:32.142528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.979 [2024-07-12 12:01:32.252363] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.979 [2024-07-12 12:01:32.252419] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.979 [2024-07-12 12:01:32.252433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.979 [2024-07-12 12:01:32.252443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.979 [2024-07-12 12:01:32.252452] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
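The trace above and immediately below covers the target bring-up for the error-path test: nvmf_tgt is started with --wait-for-rpc, and crc32c work is routed to the error-injection accel module before the TCP transport, null bdev and listener are configured. A condensed, hedged sketch of that sequence, using only paths and arguments visible in this trace (rpc_cmd and waitforlisten are the harness helpers seen in the xtrace; PID capture is simplified here), might look like:

  # start the NVMe-oF TCP target inside the test netns, paused until RPC configuration arrives
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"          # block until the target's RPC socket (/var/tmp/spdk.sock) is listening
  # route every crc32c operation through the "error" accel module so digests can be corrupted on demand later
  rpc_cmd accel_assign_opc -o crc32c -m error
  # common_target_config then creates the TCP transport, the null0 bdev and the
  # 10.0.0.2 port 4420 listener reported in the tcp.c notices that follow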
00:24:42.979 [2024-07-12 12:01:32.252479] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.979 [2024-07-12 12:01:32.305048] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.979 null0 00:24:42.979 [2024-07-12 12:01:32.427773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.979 [2024-07-12 12:01:32.452003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1023610 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1023610 /var/tmp/bperf.sock 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1023610 ']' 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:42.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:42.979 12:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:43.238 [2024-07-12 12:01:32.502101] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:43.238 [2024-07-12 12:01:32.502181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023610 ] 00:24:43.238 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.238 [2024-07-12 12:01:32.569262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.238 [2024-07-12 12:01:32.685509] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.169 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:44.169 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:24:44.170 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:44.170 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.427 12:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.685 nvme0n1 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:44.685 12:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.943 Running I/O for 2 seconds... 00:24:44.943 [2024-07-12 12:01:34.281724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.943 [2024-07-12 12:01:34.281775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.943 [2024-07-12 12:01:34.281797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.943 [2024-07-12 12:01:34.297012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.943 [2024-07-12 12:01:34.297050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.943 [2024-07-12 12:01:34.297070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.943 [2024-07-12 12:01:34.308815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.943 [2024-07-12 12:01:34.308851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.308881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.327563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.327598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.327617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.342585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.342620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.342639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.354449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.354483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.354503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.369701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.369735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9798 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.369755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.381983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.382017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.382036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.398720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.398762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.398783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.414056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.414091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.414111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.944 [2024-07-12 12:01:34.426392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:44.944 [2024-07-12 12:01:34.426426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.944 [2024-07-12 12:01:34.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.201 [2024-07-12 12:01:34.442991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.201 [2024-07-12 12:01:34.443028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.201 [2024-07-12 12:01:34.443049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.201 [2024-07-12 12:01:34.455036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.201 [2024-07-12 12:01:34.455072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.201 [2024-07-12 12:01:34.455091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.201 [2024-07-12 12:01:34.469216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.201 [2024-07-12 12:01:34.469251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:24283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.201 [2024-07-12 12:01:34.469271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.201 [2024-07-12 12:01:34.483229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.483263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.483282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.496839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.496881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.496904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.510343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.510378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.510398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.524408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.524442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.524461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.536370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.536404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.536423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.550658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.550692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.550711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.568670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.568704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.568724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.580756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.580790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.580809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.597532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.597567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.597586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.614650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.614685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.614704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.632690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.632724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.632743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.644616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.644649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.644675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.661616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.661650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.661672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.675644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 
00:24:45.202 [2024-07-12 12:01:34.675679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.675699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.202 [2024-07-12 12:01:34.688821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.202 [2024-07-12 12:01:34.688855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.202 [2024-07-12 12:01:34.688885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.705959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.705996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.706017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.719325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.719360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.719380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.732916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.732950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.732970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.745566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.745599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.745619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.760652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.760686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.760705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.772649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.772697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.772717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.786425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.786459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.786478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.801819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.801853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.801883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.815265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.815299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.815319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.826681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.826714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.826733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.843171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.843205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.843236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.858740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.858773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.858794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.870471] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.870503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.870522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.887650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.887683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.887704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.899204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.899241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.899260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.915759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.460 [2024-07-12 12:01:34.915792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.460 [2024-07-12 12:01:34.915811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.460 [2024-07-12 12:01:34.930934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.461 [2024-07-12 12:01:34.930967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.461 [2024-07-12 12:01:34.930988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.461 [2024-07-12 12:01:34.942787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.461 [2024-07-12 12:01:34.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.461 [2024-07-12 12:01:34.942841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:34.960558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:34.960594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:34.960614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
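The repeated records above and below are the error path working as intended: with crc32c injection set to "corrupt", the data digest carried by each transfer no longer matches what the initiator computes on receive (nvme_tcp_accel_seq_recv_compute_crc32_done), so each affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR and is retried per --bdev-retry-count -1. A hedged condensation of the initiator-side setup, using only commands captured in this trace ($SPDK is a readability shorthand for the workspace prefix; rpc_cmd is the harness helper from the xtrace), might look like:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdevperf as the initiator on core 1 (-m 2), deferring I/O until perform_tests is called (-z)
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  # unlimited bdev retries plus NVMe error statistics, then attach with data digest (--ddgst) enabled
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # switch crc32c injection from "disable" to "corrupt" (arguments as captured above), then run the 2-second randread workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests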
00:24:45.718 [2024-07-12 12:01:34.971520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:34.971554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:34.971574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:34.988358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:34.988392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:34.988413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.003373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.003407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.003426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.014880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.014912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.014940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.029735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.029769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.045500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.045533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.045552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.058500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.058544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.058563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.071362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.071396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.071415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.084624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.084658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.084678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.097761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.097795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.097822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.111608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.111644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.111667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.126047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.126082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.126102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.137627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.137665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.137685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.155682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.155716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.155736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.169949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.169983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.170002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.182140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.182184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.182203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.195973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.718 [2024-07-12 12:01:35.196007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.718 [2024-07-12 12:01:35.196026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.718 [2024-07-12 12:01:35.208330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.719 [2024-07-12 12:01:35.208362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.719 [2024-07-12 12:01:35.208382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.224067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.224100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.224125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.235406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.235438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.235464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.250992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.251024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.251055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.262471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.262503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.262522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.277283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.277317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.277336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.290303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.290336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.290356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.303374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.303406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.303425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.316688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.316721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.316746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.330035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.976 [2024-07-12 12:01:35.330068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.976 [2024-07-12 12:01:35.330088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.976 [2024-07-12 12:01:35.344094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.344127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:45.977 [2024-07-12 12:01:35.344147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.358247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.358279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.358298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.371113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.371152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.371172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.385527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.385560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.385579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.396809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.396842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.396861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.411518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.411550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.411569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.425520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.425554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.425573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.438354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.438388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:23242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.438408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.453671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.453704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.453724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.977 [2024-07-12 12:01:35.467343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:45.977 [2024-07-12 12:01:35.467375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.977 [2024-07-12 12:01:35.467394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.481625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.481659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.481682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.493913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.493947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.493966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.507190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.507223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.507243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.520305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.520337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.520357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.534080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.534114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.534134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.547515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.547548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.547567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.560817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.560849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.237 [2024-07-12 12:01:35.560875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.237 [2024-07-12 12:01:35.574103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.237 [2024-07-12 12:01:35.574135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.574154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.588312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.588354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.603964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.603998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.604023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.615541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.615574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.615593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.631306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 
00:24:46.238 [2024-07-12 12:01:35.631340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.631359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.647011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.647045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.647065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.657859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.657900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.657919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.674164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.674218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.689484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.689519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.689539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.700983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.701036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.238 [2024-07-12 12:01:35.718353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.238 [2024-07-12 12:01:35.718387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.238 [2024-07-12 12:01:35.718406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.733957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.733998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.734018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.746635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.746670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.746690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.762872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.762905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.762925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.775162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.775196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.775215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.789635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.789669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.789689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.805338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.805372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.805391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.817282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.817316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.817335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.833121] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.833156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.833176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.844141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.496 [2024-07-12 12:01:35.844175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-07-12 12:01:35.844194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-07-12 12:01:35.861942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.861976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.861996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.877902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.877935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.877954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.889896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.889943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.889963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.903901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.903934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.903954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.917191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.917224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.917243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:46.497 [2024-07-12 12:01:35.932388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.932422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.932443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.945267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.945301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.945320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.960253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.960287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.960306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.976272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.976306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.976332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.497 [2024-07-12 12:01:35.987080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.497 [2024-07-12 12:01:35.987112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-07-12 12:01:35.987132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.754 [2024-07-12 12:01:36.003730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.754 [2024-07-12 12:01:36.003765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.754 [2024-07-12 12:01:36.003784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.020416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.020450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.020469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.034176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.034210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.034229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.047116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.047149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.047169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.060539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.060572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.060591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.073795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.073829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.073849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.087823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.087857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.087885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.101334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.101369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.101389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.114759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.114794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.114813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.127219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.127254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.127274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.143141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.143177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.143196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.154130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.154165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.154184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.170419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.170454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.170473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.186351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.186406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.200204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.200239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.200258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.755 [2024-07-12 12:01:36.214191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640) 00:24:46.755 [2024-07-12 12:01:36.214225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.755 [2024-07-12 12:01:36.214250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:46.755 [2024-07-12 12:01:36.226444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640)
00:24:46.755 [2024-07-12 12:01:36.226477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.755 [2024-07-12 12:01:36.226497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:46.755 [2024-07-12 12:01:36.242127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640)
00:24:46.755 [2024-07-12 12:01:36.242160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.755 [2024-07-12 12:01:36.242180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:47.013 [2024-07-12 12:01:36.255129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640)
00:24:47.013 [2024-07-12 12:01:36.255163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.013 [2024-07-12 12:01:36.255182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:47.013 [2024-07-12 12:01:36.266632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x744640)
00:24:47.013 [2024-07-12 12:01:36.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.013 [2024-07-12 12:01:36.266685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:47.013
00:24:47.013 Latency(us)
00:24:47.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:47.013 nvme0n1 : 2.00 18081.14 70.63 0.00 0.00 7068.51 3859.34 19126.80
00:24:47.013 ===================================================================================================================
00:24:47.013 Total : 18081.14 70.63 0.00 0.00 7068.51 3859.34 19126.80
00:24:47.013 0
00:24:47.013 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:47.013 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:47.013 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:47.013 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:47.013 | .driver_specific
00:24:47.013 | .nvme_error
00:24:47.013 | .status_code
00:24:47.013 | .command_transient_transport_error'
00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@73 -- # killprocess 1023610 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1023610 ']' 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1023610 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1023610 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1023610' 00:24:47.301 killing process with pid 1023610 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1023610 00:24:47.301 Received shutdown signal, test time was about 2.000000 seconds 00:24:47.301 00:24:47.301 Latency(us) 00:24:47.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.301 =================================================================================================================== 00:24:47.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.301 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1023610 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1024148 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1024148 /var/tmp/bperf.sock 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1024148 ']' 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.558 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:47.559 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:47.559 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:47.559 12:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.559 [2024-07-12 12:01:36.912246] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:47.559 [2024-07-12 12:01:36.912342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024148 ] 00:24:47.559 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:47.559 Zero copy mechanism will not be used. 00:24:47.559 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.559 [2024-07-12 12:01:36.971741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.829 [2024-07-12 12:01:37.079535] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.829 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:47.829 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:24:47.829 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:47.829 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.088 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.658 nvme0n1 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:48.658 12:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:48.658 Zero copy mechanism will not be used. 
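For reference, the digest.sh steps traced above boil down to the sequence below. This is a condensed sketch assembled only from commands visible in the trace, not part of the captured console output; rpc_py and bperf_py are shorthand introduced here for the full rpc.py/bdevperf.py invocations shown at digest.sh@18/@19, rpc_cmd is the autotest helper left as traced (its RPC socket is not shown in this part of the log), and running bdevperf in the background is implied by the bperfpid assignment rather than traced explicitly.

# shorthand for the two entry points shown verbatim in the trace (digest.sh@18/@19)
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
bperf_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock"

# relaunch bdevperf on core mask 0x2 with its own RPC socket: 128 KiB random reads,
# queue depth 16, 2 seconds; -z defers the run until perform_tests is issued
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# digest.sh@61-@64: enable per-controller NVMe error counters, clear any previous
# accel error injection, then attach the target with TCP data digest enabled
$rpc_py bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable
$rpc_py bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# digest.sh@67-@69: arm crc32c corruption in the accel layer (arguments as traced) and
# run the workload; each corrupted digest surfaces below as a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
$bperf_py perform_tests

# digest.sh@71/@27/@28: read the transient-error count back from the iostat JSON
$rpc_py bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'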
00:24:48.658 Running I/O for 2 seconds... 00:24:48.658 [2024-07-12 12:01:37.989526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:37.989587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:37.989615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:37.995893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:37.995944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:37.995968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.001168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.001220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.001241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.006618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.006654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.006673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.012408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.012447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.012473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.016366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.016411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.016432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.021128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.021159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.021190] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.027490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.027522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.027539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.033493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.033530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.658 [2024-07-12 12:01:38.033549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.658 [2024-07-12 12:01:38.039479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.658 [2024-07-12 12:01:38.039516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.039536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.045718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.045755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.045775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.051502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.051538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.051557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.057530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.057567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.057586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.063708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.063763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.070216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.070253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.070273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.076628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.076664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.076684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.083596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.083632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.083652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.090407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.090444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.090464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.096525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.096562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.096587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.100823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.100858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.100907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.107788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.107824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:48.659 [2024-07-12 12:01:38.107844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.114646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.114681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.114701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.121329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.127524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.127560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.127581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.134360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.134396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.134415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.140967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.141013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.141030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.659 [2024-07-12 12:01:38.147862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.659 [2024-07-12 12:01:38.147922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.659 [2024-07-12 12:01:38.147940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.154923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.154955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.154972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.161849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.161894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.161927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.168694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.168730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.168750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.175921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.175952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.175969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.183469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.183505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.183524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.192175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.192222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.192243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.199238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.199275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.199295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.206204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.206241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.206261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.212136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.212184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.212202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.218437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.218473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.218493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.224499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.224535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.224554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.230595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.230631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.230650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.236160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.236210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.236236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.242214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.242250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.242269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.248146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.248178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.248196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.254161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.254212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.254232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.260234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.260270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.260290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.266138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.266186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.266206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.272398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.272435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.272455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.278524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.278559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.278579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.284639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.284675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.284694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.290859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.290909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.290944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.296994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.297025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.297043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.921 [2024-07-12 12:01:38.302633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.921 [2024-07-12 12:01:38.302670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.921 [2024-07-12 12:01:38.302692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.308974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.309005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.309023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.315025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.315057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.315074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.321413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.321448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.328727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.328762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.328782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.336491] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.336527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.336547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.344158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.344190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.344225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.351823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.351858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.351886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.359532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.359568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.359588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.367295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.367330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.375004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.375036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.375069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.382726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.382761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:48.922 [2024-07-12 12:01:38.390450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.390486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.390506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.398115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.398162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.398181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.922 [2024-07-12 12:01:38.405922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:48.922 [2024-07-12 12:01:38.405954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.922 [2024-07-12 12:01:38.405987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.413740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.413778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.413803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.421495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.421530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.421549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.429083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.429115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.429131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.436769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.436805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.436824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.443840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.443884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.443918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.450236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.450271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.450291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.456783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.181 [2024-07-12 12:01:38.456818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.181 [2024-07-12 12:01:38.456837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.181 [2024-07-12 12:01:38.462970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.463002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.463019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.469107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.469140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.469158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.475147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.475185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.475203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.481540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.481577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.481597] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.487691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.487730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.487750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.493971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.494003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.494034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.499900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.499958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.499977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.506069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.506102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.506119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.511997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.512031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.512050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.517942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.517974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.517992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.524047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.524079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.524097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.527538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.527572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.527591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.533799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.533855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.540251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.540287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.540306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.546604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.546641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.546660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.552673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.552709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.552729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.558403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.558439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.558459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.564138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.564171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:49.182 [2024-07-12 12:01:38.564189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.570703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.570740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.570764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.577835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.577882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.577902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.584877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.584910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.584928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.593267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.593303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.593323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.599874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.599923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.599942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.605625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.605656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.605674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.611781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.611817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.611836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.619113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.619144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.619161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.626883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.626931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.626948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.633526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.633561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.182 [2024-07-12 12:01:38.633581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.182 [2024-07-12 12:01:38.640158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.182 [2024-07-12 12:01:38.640206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.640226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.183 [2024-07-12 12:01:38.646875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.183 [2024-07-12 12:01:38.646924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.646941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.183 [2024-07-12 12:01:38.651368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.183 [2024-07-12 12:01:38.651402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.651421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.183 [2024-07-12 12:01:38.656603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.183 [2024-07-12 12:01:38.656638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.656658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.183 [2024-07-12 12:01:38.662864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.183 [2024-07-12 12:01:38.662923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.662940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.183 [2024-07-12 12:01:38.669829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.183 [2024-07-12 12:01:38.669864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.183 [2024-07-12 12:01:38.669909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.441 [2024-07-12 12:01:38.677553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.441 [2024-07-12 12:01:38.677588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.441 [2024-07-12 12:01:38.677608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.441 [2024-07-12 12:01:38.684131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.441 [2024-07-12 12:01:38.684162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.684197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.690685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.690721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.690746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.697058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.697093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.697112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.703755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 
[2024-07-12 12:01:38.703790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.703810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.711032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.711065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.711098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.719050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.719082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.719100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.726629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.726665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.726684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.733920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.733965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.733982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.741297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.741333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.741353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.749918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.749953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.749972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.757599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.757640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.757660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.765235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.765270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.765290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.772840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.772882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.772904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.780047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.780101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.787068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.787113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.787130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.794640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.794676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.794695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.801456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.801492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.801511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.808368] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.808404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.808424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.814908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.814943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.814963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.821378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.821413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.821432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.827945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.827979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.827999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.834284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.834340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.840582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.840617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.840636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.847262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.847297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.847317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:49.442 [2024-07-12 12:01:38.853964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.853997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.854029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.860039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.860086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.860104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.866608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.866643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.442 [2024-07-12 12:01:38.866663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.442 [2024-07-12 12:01:38.873316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.442 [2024-07-12 12:01:38.873351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.873376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.879572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.879608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.879628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.886078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.886111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.886128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.893136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.893167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.893202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.899848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.899891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.899913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.906596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.906631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.906651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.912693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.912728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.912748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.918714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.918749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.918769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.924781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.924815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.924835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.443 [2024-07-12 12:01:38.931054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.443 [2024-07-12 12:01:38.931090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.443 [2024-07-12 12:01:38.931109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.707 [2024-07-12 12:01:38.937124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.707 [2024-07-12 12:01:38.937156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.707 [2024-07-12 12:01:38.937190] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.707 [2024-07-12 12:01:38.943314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.707 [2024-07-12 12:01:38.943349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.707 [2024-07-12 12:01:38.943369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.707 [2024-07-12 12:01:38.949603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.707 [2024-07-12 12:01:38.949638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.949658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.955450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.955484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.955503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.961479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.961515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.961534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.967360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.967395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.967414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.973203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.973239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.973260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.979470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.979505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.979524] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.983017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.983047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.983064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.988795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.988831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.988850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:38.994703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:38.994738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:38.994757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.000674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.000709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.000728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.006540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.006575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.006594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.012572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.012606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.012625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.018195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.018244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.018263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.023570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.023604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.023623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.029615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.029659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.029679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.035681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.035716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.042089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.042120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.042138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.048318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.048353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.048373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.055661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.055716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.063401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.063436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.063455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.071214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.071249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.071269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.078897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.078932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.086575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.086610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.086629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.094324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.094360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.094379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.102079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.102111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.102128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.109665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.109700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.109719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.117377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.117413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.117432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.125238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.125274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.125293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.133086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.133121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.133140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.140820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.140855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.140886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.148362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.148397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.148416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.156015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.156050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.156075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.163748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.163783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.163803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.170775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 
00:24:49.708 [2024-07-12 12:01:39.170810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.170829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.176772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.176811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.176832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.183024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.183059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.708 [2024-07-12 12:01:39.183078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.708 [2024-07-12 12:01:39.189652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.708 [2024-07-12 12:01:39.189687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.709 [2024-07-12 12:01:39.189706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.709 [2024-07-12 12:01:39.196172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.709 [2024-07-12 12:01:39.196207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.709 [2024-07-12 12:01:39.196227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.709 [2024-07-12 12:01:39.199798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.709 [2024-07-12 12:01:39.199832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.709 [2024-07-12 12:01:39.199855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.204387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.204423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.204443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.210185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.210226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.210246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.216248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.216283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.216303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.222384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.222420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.222439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.226780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.226815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.226834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.231833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.231898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.238201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.238237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.238257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.244962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.244997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.245017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.251602] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.251638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.251658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.258750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.258786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.258806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.265982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.266017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.266037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.273042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.273078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.273098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.280820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.280856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.280886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.288556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.288592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.288612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.296297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.296333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.296352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:49.968 [2024-07-12 12:01:39.304221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.304257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.304277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.312142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.312178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.312198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.320041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.320077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.320096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.327905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.327940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.327966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.335346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.335383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.335402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.343531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.343566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.343586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.351441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.351477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.351498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.359330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.359367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.359387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.367017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.367052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.367073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.374675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.374711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.374741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.382354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.382390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.382410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.390112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.390147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.390166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.396696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.396732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.396751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.403043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.403078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.403098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.409572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.409609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.409629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.416079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.416113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.416133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.422452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.422488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.422508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.968 [2024-07-12 12:01:39.429065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.968 [2024-07-12 12:01:39.429101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.968 [2024-07-12 12:01:39.429132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.969 [2024-07-12 12:01:39.436657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.969 [2024-07-12 12:01:39.436691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.969 [2024-07-12 12:01:39.436711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.969 [2024-07-12 12:01:39.443824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.969 [2024-07-12 12:01:39.443859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.969 [2024-07-12 12:01:39.443890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.969 [2024-07-12 12:01:39.450250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.969 [2024-07-12 12:01:39.450285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.969 
[2024-07-12 12:01:39.450311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.969 [2024-07-12 12:01:39.456694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:49.969 [2024-07-12 12:01:39.456729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.969 [2024-07-12 12:01:39.456748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.463280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.463315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.463335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.469933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.469968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.469987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.475516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.475551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.475571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.481475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.481511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.481530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.487652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.487686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.487706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.493748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.493783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.227 [2024-07-12 12:01:39.493803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.227 [2024-07-12 12:01:39.497882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.227 [2024-07-12 12:01:39.497921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.497940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.502770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.502812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.502832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.508983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.509019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.509039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.515481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.515517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.515537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.521989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.522025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.522045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.528298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.528334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.528353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.535018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.535055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.535074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.541772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.541808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.541828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.548298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.548334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.548353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.554961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.554997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.555016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.561613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.561649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.561669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.568289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.568325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.568345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.575207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.575243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.575263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.581912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.581948] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.581974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.587770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.587808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.587832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.594717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.594752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.594772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.600758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.600795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.600814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.607109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.607144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.607163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.613601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.613637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.613663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.620003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.620040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.620060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.626151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.626186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.626206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.632646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.632682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.632702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.639082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.639118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.645316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.645351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.645371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.651338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.651373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.651393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.657396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.657432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.657452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.663665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.663700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.663720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.667820] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.667861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.667897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.672219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.672255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.228 [2024-07-12 12:01:39.672275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.228 [2024-07-12 12:01:39.678295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.228 [2024-07-12 12:01:39.678331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.678351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.684322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.684357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.684377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.690369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.690405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.690425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.696492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.696528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.696548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.702191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.702226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.702246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:50.229 [2024-07-12 12:01:39.708222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.708258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.714210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.229 [2024-07-12 12:01:39.714246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.229 [2024-07-12 12:01:39.714265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.229 [2024-07-12 12:01:39.720160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.720197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.720219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.726228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.726266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.726286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.732214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.732249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.732270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.738188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.738224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.738243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.744693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.744729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.744749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.750961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.751016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.757804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.757841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.757861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.763994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.764029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.764049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.770238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.770273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.776291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.776326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.776346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.782673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.782710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.782730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.789366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.789402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.789422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.795631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.795666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.795686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.801852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.801896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.801917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.808076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.808112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.808132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.814151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.814187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.814206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.820473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.820510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.820529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.826525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.826562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.826583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.832504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.832540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.832559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.838697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.838733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.838753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.845379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.845415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.845435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.851800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.851836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.851856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.858103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.858139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.858160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.865781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.865837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.873540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.873577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 [2024-07-12 12:01:39.873597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.489 [2024-07-12 12:01:39.881740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.489 [2024-07-12 12:01:39.881775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.489 
[2024-07-12 12:01:39.881804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.888777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.888813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.888833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.896559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.896595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.896615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.904325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.904361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.904381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.912771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.912807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.912827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.919964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.920000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.920025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.927744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.927780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.927800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.935566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.935602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.935622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.944163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.944200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.944219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.952133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.952175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.952196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.960449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.960485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.960505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.968135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.968171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.968192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.490 [2024-07-12 12:01:39.975680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.490 [2024-07-12 12:01:39.975716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.490 [2024-07-12 12:01:39.975735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.748 [2024-07-12 12:01:39.983050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f63d0) 00:24:50.748 [2024-07-12 12:01:39.983087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.748 [2024-07-12 12:01:39.983107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.748 00:24:50.748 Latency(us) 00:24:50.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.748 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:50.748 nvme0n1 : 2.00 4708.78 
588.60 0.00 0.00 3393.43 819.20 8883.77
00:24:50.748 ===================================================================================================================
00:24:50.748 Total : 4708.78 588.60 0.00 0.00 3393.43 819.20 8883.77
00:24:50.748 0
00:24:50.748 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:50.748 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:50.748 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:50.748 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:50.748 | .driver_specific
00:24:50.748 | .nvme_error
00:24:50.748 | .status_code
00:24:50.748 | .command_transient_transport_error'
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 303 > 0 ))
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1024148
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1024148 ']'
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1024148
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1024148
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1024148'
00:24:51.006 killing process with pid 1024148
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1024148
00:24:51.006 Received shutdown signal, test time was about 2.000000 seconds
00:24:51.006
00:24:51.006 Latency(us)
00:24:51.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.006 ===================================================================================================================
00:24:51.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:51.006 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1024148
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1024562
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1024562 /var/tmp/bperf.sock
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1024562 ']'
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:51.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:24:51.265 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:51.265 [2024-07-12 12:01:40.665806] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization...
00:24:51.265 [2024-07-12 12:01:40.665909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024562 ]
00:24:51.265 EAL: No free 2048 kB hugepages reported on node 1
00:24:51.265 [2024-07-12 12:01:40.735232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:51.523 [2024-07-12 12:01:40.862761] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:24:51.523 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:24:51.523 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:24:51.523 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:51.523 12:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:52.092 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:52.093 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:52.093 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:52.093 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:52.093 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:52.093 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:52.370 nvme0n1
00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:52.370 12:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:52.370 Running I/O for 2 seconds... 00:24:52.370 [2024-07-12 12:01:41.782486] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ee5c8 00:24:52.370 [2024-07-12 12:01:41.783543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.783589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.794749] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fac10 00:24:52.370 [2024-07-12 12:01:41.795758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.795793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.808164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eaef0 00:24:52.370 [2024-07-12 12:01:41.809357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.809390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.821049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fda78 00:24:52.370 [2024-07-12 12:01:41.821725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.821759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.836301] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f0ff8 00:24:52.370 [2024-07-12 12:01:41.838239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.838272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.845642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e3060 00:24:52.370 [2024-07-12 12:01:41.846516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.846547] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:52.370 [2024-07-12 12:01:41.859115] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ee5c8 00:24:52.370 [2024-07-12 12:01:41.860232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.370 [2024-07-12 12:01:41.860264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.874590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f9f68 00:24:52.635 [2024-07-12 12:01:41.876373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.876406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.887885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fd640 00:24:52.635 [2024-07-12 12:01:41.889857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.889923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.897031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:52.635 [2024-07-12 12:01:41.898019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.898062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.909994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e6fa8 00:24:52.635 [2024-07-12 12:01:41.910978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.911007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.925633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ed0b0 00:24:52.635 [2024-07-12 12:01:41.927290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.927323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.935993] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ddc00 00:24:52.635 [2024-07-12 12:01:41.937018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 
[2024-07-12 12:01:41.937061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.949009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f5be8 00:24:52.635 [2024-07-12 12:01:41.950270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.950316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.961025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fda78 00:24:52.635 [2024-07-12 12:01:41.962233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.962280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.974349] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ecc78 00:24:52.635 [2024-07-12 12:01:41.975648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.975680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.987691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f7970 00:24:52.635 [2024-07-12 12:01:41.989158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:41.989204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:41.999571] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e8d30 00:24:52.635 [2024-07-12 12:01:42.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.000572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.011051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e8088 00:24:52.635 [2024-07-12 12:01:42.012016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.012059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.024411] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f46d0 00:24:52.635 [2024-07-12 12:01:42.025510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14209 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.025542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.037659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ed0b0 00:24:52.635 [2024-07-12 12:01:42.039022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.039053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.050851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f92c0 00:24:52.635 [2024-07-12 12:01:42.052314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.052346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.064052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ebb98 00:24:52.635 [2024-07-12 12:01:42.065669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.065706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.077325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e7818 00:24:52.635 [2024-07-12 12:01:42.079175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.079221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.090526] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f81e0 00:24:52.635 [2024-07-12 12:01:42.092481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.092514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.104045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f6890 00:24:52.635 [2024-07-12 12:01:42.106164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.106210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.113007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e88f8 00:24:52.635 [2024-07-12 12:01:42.114015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3877 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.114059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:52.635 [2024-07-12 12:01:42.125125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e4140 00:24:52.635 [2024-07-12 12:01:42.126109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.635 [2024-07-12 12:01:42.126137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.140913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f6020 00:24:52.894 [2024-07-12 12:01:42.142434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.142467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.151643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190dece0 00:24:52.894 [2024-07-12 12:01:42.152310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.152343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.164855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f7538 00:24:52.894 [2024-07-12 12:01:42.165700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.165732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.178136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eb328 00:24:52.894 [2024-07-12 12:01:42.179155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.179202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.190071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e7c50 00:24:52.894 [2024-07-12 12:01:42.191803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.191835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.200932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e3060 00:24:52.894 [2024-07-12 12:01:42.201728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.201759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.214128] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f4298 00:24:52.894 [2024-07-12 12:01:42.215134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.215178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.227023] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f7538 00:24:52.894 [2024-07-12 12:01:42.228129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.228158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.242016] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e9168 00:24:52.894 [2024-07-12 12:01:42.243532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.243567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.254011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e7c50 00:24:52.894 [2024-07-12 12:01:42.255321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.255354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.267173] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eee38 00:24:52.894 [2024-07-12 12:01:42.268854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.268894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.280439] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ed920 00:24:52.894 [2024-07-12 12:01:42.282276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.282308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.289499] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f1430 00:24:52.894 [2024-07-12 12:01:42.290338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.290370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.302724] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e1f80 00:24:52.894 [2024-07-12 12:01:42.303743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.303776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.316011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f31b8 00:24:52.894 [2024-07-12 12:01:42.317163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.317210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.329202] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e9168 00:24:52.894 [2024-07-12 12:01:42.330542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.330575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.342455] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:52.894 [2024-07-12 12:01:42.344021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.344066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.355702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e1f80 00:24:52.894 [2024-07-12 12:01:42.357363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.357396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.368531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fa7d8 00:24:52.894 [2024-07-12 12:01:42.370239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.370271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:52.894 [2024-07-12 12:01:42.377812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f1868 00:24:52.894 [2024-07-12 12:01:42.378643] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.894 [2024-07-12 12:01:42.378675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.390875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e1710 00:24:53.155 [2024-07-12 12:01:42.391777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.391815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.405938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f96f8 00:24:53.155 [2024-07-12 12:01:42.407335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.420476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e95a0 00:24:53.155 [2024-07-12 12:01:42.422537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.422570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.429480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fd640 00:24:53.155 [2024-07-12 12:01:42.430483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.430515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.442701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ddc00 00:24:53.155 [2024-07-12 12:01:42.443893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.443937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.455941] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef270 00:24:53.155 [2024-07-12 12:01:42.457284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.457316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.468777] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fc998 00:24:53.155 
[2024-07-12 12:01:42.470152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.470196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.481107] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f0bc0 00:24:53.155 [2024-07-12 12:01:42.482457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.482489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.494428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fd208 00:24:53.155 [2024-07-12 12:01:42.495964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.495993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.507756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e01f8 00:24:53.155 [2024-07-12 12:01:42.509497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.509529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.521118] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e4140 00:24:53.155 [2024-07-12 12:01:42.523046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.523074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.534495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e84c0 00:24:53.155 [2024-07-12 12:01:42.536542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.536574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.543511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eaef0 00:24:53.155 [2024-07-12 12:01:42.544343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.544371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.555474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with 
pdu=0x2000190f4298 00:24:53.155 [2024-07-12 12:01:42.556354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.556386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.568365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190de038 00:24:53.155 [2024-07-12 12:01:42.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.569251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.583378] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e8d30 00:24:53.155 [2024-07-12 12:01:42.584952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.584981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.596285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f92c0 00:24:53.155 [2024-07-12 12:01:42.597327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.597360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.608642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ed0b0 00:24:53.155 [2024-07-12 12:01:42.610026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.610056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.621231] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eb760 00:24:53.155 [2024-07-12 12:01:42.622630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.622661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.634103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f7da8 00:24:53.155 [2024-07-12 12:01:42.635513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.635546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:53.155 [2024-07-12 12:01:42.646470] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d99be0) with pdu=0x2000190f0788 00:24:53.155 [2024-07-12 12:01:42.647579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.155 [2024-07-12 12:01:42.647612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.658188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f6cc8 00:24:53.414 [2024-07-12 12:01:42.659248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.659280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.671422] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fdeb0 00:24:53.414 [2024-07-12 12:01:42.672645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.672676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.684390] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f6890 00:24:53.414 [2024-07-12 12:01:42.685625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.685658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.696772] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190dfdc0 00:24:53.414 [2024-07-12 12:01:42.697486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.697519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.711825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e0a68 00:24:53.414 [2024-07-12 12:01:42.713756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.713787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.725043] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f4b08 00:24:53.414 [2024-07-12 12:01:42.727112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.414 [2024-07-12 12:01:42.727149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:53.414 [2024-07-12 12:01:42.734015] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f1868 00:24:53.414 [2024-07-12 12:01:42.734906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.734938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.747208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190fcdd0 00:24:53.415 [2024-07-12 12:01:42.748271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.748302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.760410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f0788 00:24:53.415 [2024-07-12 12:01:42.761644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.761675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.774764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f96f8 00:24:53.415 [2024-07-12 12:01:42.776666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.787980] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e99d8 00:24:53.415 [2024-07-12 12:01:42.790065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.796950] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef270 00:24:53.415 [2024-07-12 12:01:42.797826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.797858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.811317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f1ca0 00:24:53.415 [2024-07-12 12:01:42.812878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.812919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.824538] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190e99d8 00:24:53.415 [2024-07-12 12:01:42.826292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.826323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.836102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190eaef0 00:24:53.415 [2024-07-12 12:01:42.837881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.837912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.846928] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190f6458 00:24:53.415 [2024-07-12 12:01:42.847786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.847816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.859764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190de470 00:24:53.415 [2024-07-12 12:01:42.860635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.860666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.874401] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.415 [2024-07-12 12:01:42.874637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.888224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.415 [2024-07-12 12:01:42.888458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.888490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.415 [2024-07-12 12:01:42.902014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.415 [2024-07-12 12:01:42.902244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.415 [2024-07-12 12:01:42.902277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.672 [2024-07-12 
12:01:42.916079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.672 [2024-07-12 12:01:42.916313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.672 [2024-07-12 12:01:42.916344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.672 [2024-07-12 12:01:42.929838] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.672 [2024-07-12 12:01:42.930089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.672 [2024-07-12 12:01:42.930120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.672 [2024-07-12 12:01:42.943612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.672 [2024-07-12 12:01:42.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.672 [2024-07-12 12:01:42.943894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.672 [2024-07-12 12:01:42.957426] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.672 [2024-07-12 12:01:42.957670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.672 [2024-07-12 12:01:42.957703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:42.971195] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:42.971430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:42.971461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:42.984996] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:42.985225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:42.985256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:42.998790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:42.999044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:42.999076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:24:53.673 [2024-07-12 12:01:43.012605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.012847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.012888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.026361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.026604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.026634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.040141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.040375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.040405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.053901] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.054106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.054136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.067693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.067927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.067964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.081547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.081790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.095295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.095545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.095577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.109086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.109325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.109357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.123070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.123305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.123336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.136810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.137051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.137081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.150567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.150802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.150833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.673 [2024-07-12 12:01:43.164443] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.673 [2024-07-12 12:01:43.164685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.673 [2024-07-12 12:01:43.164715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.931 [2024-07-12 12:01:43.178308] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.931 [2024-07-12 12:01:43.178545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.931 [2024-07-12 12:01:43.178577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.931 [2024-07-12 12:01:43.192096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.931 [2024-07-12 12:01:43.192329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.931 [2024-07-12 12:01:43.192369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.931 [2024-07-12 12:01:43.205899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.931 [2024-07-12 12:01:43.206143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.931 [2024-07-12 12:01:43.206174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.931 [2024-07-12 12:01:43.219671] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.931 [2024-07-12 12:01:43.219915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.931 [2024-07-12 12:01:43.219946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.931 [2024-07-12 12:01:43.233426] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.233667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.233697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.247183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.247431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.247463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.260917] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.261146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.261177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.274640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.274924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.288399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.288660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.302130] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.302373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.302405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.315832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.316077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.316107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.329567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.329807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.329841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.343315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.343546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.343578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.357065] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.357336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.370821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.371028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.371058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.384533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.384773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.384805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.398363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.398606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.398638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:53.932 [2024-07-12 12:01:43.412140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:53.932 [2024-07-12 12:01:43.412373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.932 [2024-07-12 12:01:43.412405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.426084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.426322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.426354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.439952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.440200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.440231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.453762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.453976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.454016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.467520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.467752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.467783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.481299] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.481529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.481561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.495055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.495286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.495318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.508791] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.509037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.509069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.522555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.522785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.522816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.536302] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.536543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.536575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.550070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.550298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.550335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.563825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.564066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.564097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.577560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.577798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.577827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.591328] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.591559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.591590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.605069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.605310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.605342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.618782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.192 [2024-07-12 12:01:43.619027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.192 [2024-07-12 12:01:43.619072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.192 [2024-07-12 12:01:43.632515] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.193 [2024-07-12 12:01:43.632748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.193 [2024-07-12 12:01:43.632780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.193 [2024-07-12 12:01:43.646217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.193 [2024-07-12 12:01:43.646449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.193 [2024-07-12 12:01:43.646481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.193 [2024-07-12 12:01:43.659918] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.193 [2024-07-12 12:01:43.660116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.193 [2024-07-12 12:01:43.660146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.193 [2024-07-12 12:01:43.673622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.193 [2024-07-12 12:01:43.673877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.193 [2024-07-12 12:01:43.673910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.687493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.687745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.687777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.701312] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.701546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.701589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.715096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.715329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.715362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.728854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.729096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.729129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.742545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.742779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.742819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.756280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 12:01:43.756543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 [2024-07-12 12:01:43.769983] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d99be0) with pdu=0x2000190ef6a8 00:24:54.451 [2024-07-12 12:01:43.770211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.451 [2024-07-12 
12:01:43.770242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.451 00:24:54.451 Latency(us) 00:24:54.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.451 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.451 nvme0n1 : 2.01 19339.95 75.55 0.00 0.00 6602.69 2973.39 16990.81 00:24:54.451 =================================================================================================================== 00:24:54.451 Total : 19339.95 75.55 0.00 0.00 6602.69 2973.39 16990.81 00:24:54.451 0 00:24:54.451 12:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:54.451 12:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:54.451 12:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:54.451 12:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:54.451 | .driver_specific 00:24:54.451 | .nvme_error 00:24:54.451 | .status_code 00:24:54.451 | .command_transient_transport_error' 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1024562 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1024562 ']' 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1024562 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1024562 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1024562' 00:24:54.709 killing process with pid 1024562 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1024562 00:24:54.709 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.709 00:24:54.709 Latency(us) 00:24:54.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.709 =================================================================================================================== 00:24:54.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.709 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1024562 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:54.966 12:01:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1025078 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1025078 /var/tmp/bperf.sock 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1025078 ']' 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:54.966 12:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.966 [2024-07-12 12:01:44.391918] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:24:54.966 [2024-07-12 12:01:44.392013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025078 ] 00:24:54.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:54.966 Zero copy mechanism will not be used. 
00:24:54.966 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.966 [2024-07-12 12:01:44.456039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.225 [2024-07-12 12:01:44.575420] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.161 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.424 nvme0n1 00:24:56.424 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:56.424 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.424 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:56.685 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.685 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:56.685 12:01:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.685 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:56.685 Zero copy mechanism will not be used. 00:24:56.685 Running I/O for 2 seconds... 
00:24:56.685 [2024-07-12 12:01:46.044295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.685 [2024-07-12 12:01:46.044651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.685 [2024-07-12 12:01:46.044698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.685 [2024-07-12 12:01:46.050573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.685 [2024-07-12 12:01:46.050913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.685 [2024-07-12 12:01:46.050959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.685 [2024-07-12 12:01:46.057278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.685 [2024-07-12 12:01:46.057608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.685 [2024-07-12 12:01:46.057643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.063784] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.064138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.064188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.069322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.069640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.069673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.074884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.075216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.075246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.080369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.080700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.080733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.085719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.086043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.086073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.091130] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.091459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.091493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.096618] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.096953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.096983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.103919] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.104249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.104283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.110736] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.111074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.111104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.116640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.116974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.117003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.122481] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.122795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.122827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.128271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.128541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.128571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.135080] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.135412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.135445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.142211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.142534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.142567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.149012] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.149239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.155362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.155678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.155711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.161921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.162190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.162221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.169422] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.169721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.169754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.686 [2024-07-12 12:01:46.176684] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.686 [2024-07-12 12:01:46.177044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.686 [2024-07-12 12:01:46.177085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.183946] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.184285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 [2024-07-12 12:01:46.184318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.190552] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.190912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 [2024-07-12 12:01:46.190958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.196989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.197324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 [2024-07-12 12:01:46.197357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.203568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.203907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 [2024-07-12 12:01:46.203954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.209753] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.210090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 [2024-07-12 12:01:46.210121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.945 [2024-07-12 12:01:46.216282] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.945 [2024-07-12 12:01:46.216583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.945 
[2024-07-12 12:01:46.216622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.222064] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.222435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.222469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.229513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.229829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.229862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.235878] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.236254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.236287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.241740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.242050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.242081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.247824] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.248218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.253780] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.254099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.254131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.259505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.259823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.259856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.264840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.265157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.265205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.270020] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.270347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.270381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.275315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.275628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.275662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.281847] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.282158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.282192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.287829] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.288164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.288197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.293497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.293813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.293845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.299691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.300033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.300076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.306126] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.306443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.306476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.311547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.311861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.311901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.316820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.317144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.317175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.322151] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.322464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.322496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.327897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.328228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.328261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.333728] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.334052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.334087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.339117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.339468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.339502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.344467] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.344782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.344825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.349805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.350124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.350161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.355246] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.355594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.360651] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.361007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.361040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.366004] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.366320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.366358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.371342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.371685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.371719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.376752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 
[2024-07-12 12:01:46.377077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.377122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.946 [2024-07-12 12:01:46.382325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.946 [2024-07-12 12:01:46.382639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.946 [2024-07-12 12:01:46.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.389228] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.389576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.389608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.394587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.394912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.399988] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.400302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.400333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.405337] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.405651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.405683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.410670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.411024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.416028] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.416349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.416382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.421370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.421682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.421714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.426645] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.426965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.427008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.947 [2024-07-12 12:01:46.432047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:56.947 [2024-07-12 12:01:46.432404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.947 [2024-07-12 12:01:46.432437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.438535] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.438895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.438928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.443875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.444203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.444235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.449180] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.449492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.449524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.454528] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.454842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.454883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.460972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.461289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.461321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.467313] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.467630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.467664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.473260] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.473594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.473636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.479064] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.479422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.479453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.484588] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.484668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.484698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.491480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.491795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.491827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:57.208 [2024-07-12 12:01:46.498436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.498798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.504899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.505214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.505246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.511748] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.512073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.512106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.518199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.518532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.518572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.524384] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.524716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.524750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.530520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.530879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.530911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.535987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.536302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.536334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.541383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.541697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.541729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.208 [2024-07-12 12:01:46.546742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.208 [2024-07-12 12:01:46.547062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.208 [2024-07-12 12:01:46.547095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.552037] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.552353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.552384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.557316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.557633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.557665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.562638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.562960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.562992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.568100] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.568416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.568448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.573454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.573767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.573800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.578803] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.579127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.579159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.584714] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.585036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.585069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.591754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.592111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.592143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.597933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.598251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.603823] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.604185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.604217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.610244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.610582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.610615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.617136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.617451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.617489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.623462] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.623798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.623830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.630031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.630382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.630416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.636789] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.637149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.637182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.644049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.644411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.644443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.651313] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.651662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.651695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.658801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.659105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 [2024-07-12 12:01:46.659139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.664979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.209 [2024-07-12 12:01:46.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.209 
[2024-07-12 12:01:46.665329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.209 [2024-07-12 12:01:46.671183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.210 [2024-07-12 12:01:46.671532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.210 [2024-07-12 12:01:46.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.210 [2024-07-12 12:01:46.677831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.210 [2024-07-12 12:01:46.678166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.210 [2024-07-12 12:01:46.678200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.210 [2024-07-12 12:01:46.684595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.210 [2024-07-12 12:01:46.684903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.210 [2024-07-12 12:01:46.684936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.210 [2024-07-12 12:01:46.690899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.210 [2024-07-12 12:01:46.691217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.210 [2024-07-12 12:01:46.691250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.210 [2024-07-12 12:01:46.697410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.210 [2024-07-12 12:01:46.697742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.210 [2024-07-12 12:01:46.697775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.704074] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.704478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.704511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.710314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.710611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.710644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.716955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.717295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.717328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.723755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.724063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.724097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.730836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.731147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.731180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.738548] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.738952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.738986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.745106] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.745404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.745437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.750553] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.750833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.750874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.757096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.757380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.757414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.762434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.762715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.762749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.471 [2024-07-12 12:01:46.767459] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.471 [2024-07-12 12:01:46.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.471 [2024-07-12 12:01:46.767771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.772762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.773050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.773084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.777907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.778191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.778224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.782985] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.783268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.783308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.787953] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.788235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.788269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.792970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.793251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.793284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.798025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.798306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.798339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.803096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.803378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.803411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.808115] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.808411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.808445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.813228] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.813543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.818759] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.819050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.819083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.824300] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.824583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.824616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.829304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 
[2024-07-12 12:01:46.829593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.829628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.834361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.834644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.834677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.839380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.839661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.839692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.844504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.844785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.844818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.850413] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.850696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.850729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.855556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.855840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.855879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.860586] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.860874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.860906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.865574] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.865854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.865895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.871018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.871302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.871335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.876460] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.876741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.876774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.881442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.881724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.881756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.886504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.886784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.886817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.891518] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.891802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.891835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.897361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.897641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.897676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.902542] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.902824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.902857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.907521] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.907801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.907835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.912550] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.912833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.472 [2024-07-12 12:01:46.912874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.472 [2024-07-12 12:01:46.917980] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.472 [2024-07-12 12:01:46.918263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.918302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.923451] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.923732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.923766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.928441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.928722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.928756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.933453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.933738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.933771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:57.473 [2024-07-12 12:01:46.938510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.938791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.938824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.944497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.944777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.944809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.949614] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.949905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.949939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.954599] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.954887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.954919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.473 [2024-07-12 12:01:46.959626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.473 [2024-07-12 12:01:46.959915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.473 [2024-07-12 12:01:46.959948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.965900] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.966192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.966225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.971725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.972056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.972089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.978559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.978888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.978920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.984679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.984971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.985005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.990091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.990376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.990409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:46.995637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:46.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:46.995964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.000952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.001237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.001270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.006530] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.006813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.006846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.012741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.013066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.013099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.018970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.019257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.019290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.024942] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.025228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.025261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.030817] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.031105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.031139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.036671] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.036963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.036996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.042371] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.734 [2024-07-12 12:01:47.042652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.734 [2024-07-12 12:01:47.042685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.734 [2024-07-12 12:01:47.047416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.047729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.047760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.052434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.052717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.052750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.057461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.057744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.057776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.062486] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.062773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.062806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.067538] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.067820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.067854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.072933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.073215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.073248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.078582] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.078863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.078903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.083590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.083879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.083912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.088618] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.088939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 
[2024-07-12 12:01:47.088972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.093754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.094075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.094109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.098902] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.099187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.099220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.103967] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.104250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.104284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.108935] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.109218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.109251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.114600] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.114889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.114923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.120663] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.120953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.120987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.126528] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.126806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.126839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.132153] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.132437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.132470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.138453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.138763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.138796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.144691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.144981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.145015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.150957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.151240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.151273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.157757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.158047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.158088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.164818] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.165111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.165145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.171711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.172001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.178595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.178953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.178986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.185774] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.186074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.186107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.193296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.193601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.193634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.200403] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.200698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.200733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.207887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.208174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.208207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.735 [2024-07-12 12:01:47.215205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.735 [2024-07-12 12:01:47.215599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.735 [2024-07-12 12:01:47.215631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.736 [2024-07-12 12:01:47.222688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.736 [2024-07-12 12:01:47.223080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.736 [2024-07-12 12:01:47.223115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.994 [2024-07-12 12:01:47.229864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.994 [2024-07-12 12:01:47.230159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.994 [2024-07-12 12:01:47.230192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.994 [2024-07-12 12:01:47.236078] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.994 [2024-07-12 12:01:47.236364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.994 [2024-07-12 12:01:47.236397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.994 [2024-07-12 12:01:47.242116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.994 [2024-07-12 12:01:47.242403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.994 [2024-07-12 12:01:47.242437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.994 [2024-07-12 12:01:47.247231] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.994 [2024-07-12 12:01:47.247514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.994 [2024-07-12 12:01:47.247548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.994 [2024-07-12 12:01:47.252279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.252564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.252597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.257443] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.257725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.257760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.262527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 
[2024-07-12 12:01:47.262808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.262841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.267644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.267931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.267965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.273264] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.273559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.273592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.279346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.279627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.279661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.285399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.285695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.285729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.291227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.291408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.291440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.297093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.297481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.297515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.303236] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.303542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.303575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.309320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.309634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.309667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.315428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.315712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.315745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.322116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.322399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.322439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.328479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.328764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.328797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.334733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.335098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.335131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.341060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.341347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.341380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.347699] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.348097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.348130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.354208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.354492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.354525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.359523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.359839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.365000] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.365282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.365315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.370373] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.370656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.375776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.376071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.376104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.381826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.382180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:57.995 [2024-07-12 12:01:47.388676] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.388994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.389028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.395511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.395795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.395830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.401453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.401734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.401768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.406485] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.406767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.406800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.411452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.995 [2024-07-12 12:01:47.411737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.995 [2024-07-12 12:01:47.411770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.995 [2024-07-12 12:01:47.416494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.416774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.416808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.421763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.422054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.422088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.427858] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.428150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.428183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.433943] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.434230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.434264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.439971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.440254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.440287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.446042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.446323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.446357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.452152] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.452526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.452559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.459163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.459445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.459478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.464491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.464773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.464805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.469505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.469785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.469817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.474606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.474893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.474932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.479652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.479941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.996 [2024-07-12 12:01:47.485360] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:57.996 [2024-07-12 12:01:47.485640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.996 [2024-07-12 12:01:47.485672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.491708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.492011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.492044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.497211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.497492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.497525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.502356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.502638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.502671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.507529] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.507810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.507844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.513215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.513494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.513529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.519364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.519655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.519688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.526349] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.526646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.526680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.533587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.533955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.533988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.541080] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.541381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.541414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.547334] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.547620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 
[2024-07-12 12:01:47.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.553005] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.553319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.559027] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.559311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.559343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.565348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.565642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.565674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.571627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.571917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.571949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.578765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.579163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.579203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.585346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.585627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.585660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.590798] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.591086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.591118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.596018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.596303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.596336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.601435] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.601715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.601756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.607436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.607785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.613662] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.613975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.614008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.620543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.620835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.620876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.627484] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.627860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.627912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.633962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.634250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.634283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.257 [2024-07-12 12:01:47.640314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.257 [2024-07-12 12:01:47.640596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.257 [2024-07-12 12:01:47.640629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.646395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.646676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.646708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.652098] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.652381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.652413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.657230] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.657521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.657553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.662357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.662637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.662669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.667446] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.667726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.667758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.672512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.672792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.672824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.677630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.677930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.677962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.682807] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.683094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.683136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.688391] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.688672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.688705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.693966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.694251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.694283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.699073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.699362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.699395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.704077] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.704359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.704391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.709176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 
[2024-07-12 12:01:47.709458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.715053] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.715334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.715366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.720226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.720507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.720539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.725378] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.725659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.725700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.730494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.730776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.730807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.736203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.736487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.736520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.741691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.741980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.742021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.258 [2024-07-12 12:01:47.746909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.258 [2024-07-12 12:01:47.747192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.258 [2024-07-12 12:01:47.747225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.752092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.752376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.752409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.757476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.757758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.757791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.763278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.763558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.763591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.768340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.768623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.768655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.773366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.773650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.773683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.778377] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.778657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.778690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.783416] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.783696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.783730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.788792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.789081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.789115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.795208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.795491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.795524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.800957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.801240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.801273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.806636] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.806926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.806959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.812316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.812599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.818171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.818452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:58.518 [2024-07-12 12:01:47.823969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.824252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.824284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.829913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.830196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.830229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.835798] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.836086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.836119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.841493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.518 [2024-07-12 12:01:47.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.518 [2024-07-12 12:01:47.841805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.518 [2024-07-12 12:01:47.847192] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.847474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.847507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.853106] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.853388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.853421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.858782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.859068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.859100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.864890] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.865173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.865205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.871027] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.871311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.876092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.876374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.876408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.881176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.881493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.881526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.886356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.886635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.886669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.892470] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.892778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.892811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.898592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.898917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.898951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.904892] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.905174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.905207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.911801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.912086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.912120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.918464] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.918770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.918802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.925707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.926001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.926035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.932930] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.933323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.933355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.940654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.941019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.941053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.947573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.947854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.947895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.953259] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.953542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.953574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.958292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.958573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.958606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.963465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.963747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.963781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.969394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.969755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.969788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.975497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.975779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.975818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.980636] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.980940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.980974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.985766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 
[2024-07-12 12:01:47.986086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.990965] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.991283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:47.996077] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:47.996359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:47.996393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:48.001650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:48.001941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:48.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.519 [2024-07-12 12:01:48.007861] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.519 [2024-07-12 12:01:48.008156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.519 [2024-07-12 12:01:48.008190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.014297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.014593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.014626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.020784] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.021072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.021106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.026487] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.026777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.026812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.032112] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.032396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.032428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.037352] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.037636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.037669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.777 [2024-07-12 12:01:48.043010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9a050) with pdu=0x2000190fef90 00:24:58.777 [2024-07-12 12:01:48.043175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.777 [2024-07-12 12:01:48.043206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.777 00:24:58.777 Latency(us) 00:24:58.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.777 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:58.777 nvme0n1 : 2.00 5297.05 662.13 0.00 0.00 3012.53 1808.31 7864.32 00:24:58.777 =================================================================================================================== 00:24:58.777 Total : 5297.05 662.13 0.00 0.00 3012.53 1808.31 7864.32 00:24:58.777 0 00:24:58.777 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:58.777 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:58.777 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:58.777 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:58.777 | .driver_specific 00:24:58.777 | .nvme_error 00:24:58.777 | .status_code 00:24:58.777 | .command_transient_transport_error' 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 342 > 0 )) 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1025078 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1025078 ']' 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1025078 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1025078 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:59.036 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1025078' 00:24:59.036 killing process with pid 1025078 00:24:59.037 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1025078 00:24:59.037 Received shutdown signal, test time was about 2.000000 seconds 00:24:59.037 00:24:59.037 Latency(us) 00:24:59.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.037 =================================================================================================================== 00:24:59.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.037 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1025078 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1023582 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1023582 ']' 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1023582 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1023582 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1023582' 00:24:59.296 killing process with pid 1023582 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1023582 00:24:59.296 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1023582 00:24:59.555 00:24:59.555 real 0m16.919s 00:24:59.555 user 0m33.949s 00:24:59.555 sys 0m4.305s 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.555 ************************************ 00:24:59.555 END TEST nvmf_digest_error 00:24:59.555 ************************************ 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:59.555 
12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.555 12:01:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.555 rmmod nvme_tcp 00:24:59.555 rmmod nvme_fabrics 00:24:59.555 rmmod nvme_keyring 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1023582 ']' 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1023582 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1023582 ']' 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1023582 00:24:59.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1023582) - No such process 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1023582 is not found' 00:24:59.555 Process with pid 1023582 is not found 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.555 12:01:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.093 12:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.093 00:25:02.093 real 0m40.733s 00:25:02.093 user 1m13.185s 00:25:02.093 sys 0m10.550s 00:25:02.093 12:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:02.093 12:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:02.093 ************************************ 00:25:02.093 END TEST nvmf_digest 00:25:02.093 ************************************ 00:25:02.093 12:01:51 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:25:02.093 12:01:51 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:25:02.094 12:01:51 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:25:02.094 12:01:51 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:02.094 12:01:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:02.094 12:01:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:02.094 12:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.094 ************************************ 00:25:02.094 START TEST nvmf_bdevperf 00:25:02.094 ************************************ 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:02.094 * Looking for test storage... 
00:25:02.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.094 12:01:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:04.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.059 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:04.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:04.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:04.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:04.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:25:04.060 00:25:04.060 --- 10.0.0.2 ping statistics --- 00:25:04.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.060 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:04.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:25:04.060 00:25:04.060 --- 10.0.0.1 ping statistics --- 00:25:04.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.060 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1027438 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1027438 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1027438 ']' 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:04.060 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.060 [2024-07-12 12:01:53.340561] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:25:04.060 [2024-07-12 12:01:53.340648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.060 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.060 [2024-07-12 12:01:53.410618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:04.060 [2024-07-12 12:01:53.520904] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
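[editor's note] The target above is launched inside the cvl_0_0_ns_spdk namespace and the script then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock is up. A minimal sketch of that launch-and-wait pattern, assuming an illustrative 60-second timeout (the real waitforlisten limit may differ) and a relative build path:

  # Sketch only: start nvmf_tgt in the target netns, wait for its RPC unix socket.
  NS=cvl_0_0_ns_spdk
  SOCK=/var/tmp/spdk.sock
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  tgt_pid=$!
  for _ in $(seq 1 60); do            # assumed timeout, not taken from the log
      [ -S "$SOCK" ] && break         # unix socket appears once the app is listening
      sleep 1
  done
  kill -0 "$tgt_pid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }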
00:25:04.060 [2024-07-12 12:01:53.520952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.060 [2024-07-12 12:01:53.520966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.060 [2024-07-12 12:01:53.520978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.060 [2024-07-12 12:01:53.520988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.060 [2024-07-12 12:01:53.521049] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.060 [2024-07-12 12:01:53.522659] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.060 [2024-07-12 12:01:53.522699] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 [2024-07-12 12:01:53.674199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 Malloc0 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:04.319 [2024-07-12 12:01:53.739359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:04.319 { 00:25:04.319 "params": { 00:25:04.319 "name": "Nvme$subsystem", 00:25:04.319 "trtype": "$TEST_TRANSPORT", 00:25:04.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.319 "adrfam": "ipv4", 00:25:04.319 "trsvcid": "$NVMF_PORT", 00:25:04.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.319 "hdgst": ${hdgst:-false}, 00:25:04.319 "ddgst": ${ddgst:-false} 00:25:04.319 }, 00:25:04.319 "method": "bdev_nvme_attach_controller" 00:25:04.319 } 00:25:04.319 EOF 00:25:04.319 )") 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:04.319 12:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:04.319 "params": { 00:25:04.319 "name": "Nvme1", 00:25:04.319 "trtype": "tcp", 00:25:04.319 "traddr": "10.0.0.2", 00:25:04.319 "adrfam": "ipv4", 00:25:04.319 "trsvcid": "4420", 00:25:04.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.319 "hdgst": false, 00:25:04.319 "ddgst": false 00:25:04.319 }, 00:25:04.319 "method": "bdev_nvme_attach_controller" 00:25:04.319 }' 00:25:04.319 [2024-07-12 12:01:53.788140] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:25:04.319 [2024-07-12 12:01:53.788232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027587 ] 00:25:04.578 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.578 [2024-07-12 12:01:53.848821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.578 [2024-07-12 12:01:53.960903] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.838 Running I/O for 1 seconds... 
00:25:06.211 00:25:06.211 Latency(us) 00:25:06.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.211 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:06.211 Verification LBA range: start 0x0 length 0x4000 00:25:06.211 Nvme1n1 : 1.01 8636.21 33.74 0.00 0.00 14757.39 2815.62 16019.91 00:25:06.211 =================================================================================================================== 00:25:06.211 Total : 8636.21 33.74 0.00 0.00 14757.39 2815.62 16019.91 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1027741 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:06.211 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:06.212 { 00:25:06.212 "params": { 00:25:06.212 "name": "Nvme$subsystem", 00:25:06.212 "trtype": "$TEST_TRANSPORT", 00:25:06.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.212 "adrfam": "ipv4", 00:25:06.212 "trsvcid": "$NVMF_PORT", 00:25:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.212 "hdgst": ${hdgst:-false}, 00:25:06.212 "ddgst": ${ddgst:-false} 00:25:06.212 }, 00:25:06.212 "method": "bdev_nvme_attach_controller" 00:25:06.212 } 00:25:06.212 EOF 00:25:06.212 )") 00:25:06.212 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:06.212 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:06.212 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:06.212 12:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:06.212 "params": { 00:25:06.212 "name": "Nvme1", 00:25:06.212 "trtype": "tcp", 00:25:06.212 "traddr": "10.0.0.2", 00:25:06.212 "adrfam": "ipv4", 00:25:06.212 "trsvcid": "4420", 00:25:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.212 "hdgst": false, 00:25:06.212 "ddgst": false 00:25:06.212 }, 00:25:06.212 "method": "bdev_nvme_attach_controller" 00:25:06.212 }' 00:25:06.212 [2024-07-12 12:01:55.614318] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:25:06.212 [2024-07-12 12:01:55.614405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027741 ] 00:25:06.212 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.212 [2024-07-12 12:01:55.678961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.471 [2024-07-12 12:01:55.787893] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.730 Running I/O for 15 seconds... 
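[editor's note] The second bdevperf run (15-second runtime, started with -f and the same JSON config on /dev/fd/63) is the failover case: while it is generating I/O, the script hard-kills the nvmf target (pid 1027438) and sleeps, which is what produces the abort/reset storm that follows. A hedged sketch of that step, with the pid held in an illustrative variable:

  # Sketch: hard-kill the target mid-run to force the initiator down the reset path.
  kill -9 "$nvmfpid"    # nvmf_tgt dies without closing its TCP connections cleanly
  sleep 3               # give bdevperf time to notice the dead qpair and start resetting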
00:25:09.267 12:01:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1027438 00:25:09.267 12:01:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:09.267 [2024-07-12 12:01:58.583210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.583556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583592] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.583967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.583984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.584017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.584047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.584076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.584106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.267 [2024-07-12 12:01:58.584135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.267 [2024-07-12 12:01:58.584599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.267 [2024-07-12 12:01:58.584615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.268 [2024-07-12 12:01:58.584882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.584933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.584963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.584979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.584993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 
12:01:58.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.585970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.585984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.586000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.586014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.268 [2024-07-12 12:01:58.586029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.268 [2024-07-12 12:01:58.586043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.269 [2024-07-12 12:01:58.586393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.269 [2024-07-12 12:01:58.586799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.269 [2024-07-12 12:01:58.586832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.586980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.586996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.269 [2024-07-12 12:01:58.587467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.269 [2024-07-12 12:01:58.587484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.270 [2024-07-12 12:01:58.587499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.270 [2024-07-12 12:01:58.587531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.270 [2024-07-12 12:01:58.587564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.270 [2024-07-12 12:01:58.587601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ebbe0 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.587635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:09.270 [2024-07-12 12:01:58.587647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:09.270 [2024-07-12 12:01:58.587660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48960 len:8 PRP1 0x0 PRP2 0x0 00:25:09.270 [2024-07-12 12:01:58.587675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587740] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25ebbe0 was disconnected and freed. reset controller. 
00:25:09.270 [2024-07-12 12:01:58.587820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.270 [2024-07-12 12:01:58.587844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.270 [2024-07-12 12:01:58.587884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.270 [2024-07-12 12:01:58.587931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.270 [2024-07-12 12:01:58.587958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.270 [2024-07-12 12:01:58.587970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.591804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.591845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.592588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.592640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.592660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.592924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.593144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.593182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.593200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.596804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.270 [2024-07-12 12:01:58.605925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.606309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.606341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.606359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.606598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.606841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.606876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.606894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.610469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.270 [2024-07-12 12:01:58.619971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.620378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.620409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.620427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.620666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.620920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.620946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.620963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.624540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.270 [2024-07-12 12:01:58.633842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.634224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.634256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.634274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.634514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.634757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.634782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.634798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.638381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.270 [2024-07-12 12:01:58.647886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.648281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.648313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.648331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.648576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.648821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.648846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.648862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.652451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.270 [2024-07-12 12:01:58.661734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.662112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.662143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.662161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.662400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.662643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.662668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.662684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.666270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.270 [2024-07-12 12:01:58.675758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.676174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.676206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.676224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.676463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.676707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.676732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.676748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.680334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.270 [2024-07-12 12:01:58.689783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.270 [2024-07-12 12:01:58.690202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.270 [2024-07-12 12:01:58.690235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.270 [2024-07-12 12:01:58.690253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.270 [2024-07-12 12:01:58.690492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.270 [2024-07-12 12:01:58.690735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.270 [2024-07-12 12:01:58.690760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.270 [2024-07-12 12:01:58.690782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.270 [2024-07-12 12:01:58.694367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.271 [2024-07-12 12:01:58.703651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.271 [2024-07-12 12:01:58.704053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.271 [2024-07-12 12:01:58.704085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.271 [2024-07-12 12:01:58.704103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.271 [2024-07-12 12:01:58.704342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.271 [2024-07-12 12:01:58.704587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.271 [2024-07-12 12:01:58.704612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.271 [2024-07-12 12:01:58.704628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.271 [2024-07-12 12:01:58.708211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.271 [2024-07-12 12:01:58.717492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.271 [2024-07-12 12:01:58.717882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.271 [2024-07-12 12:01:58.717914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.271 [2024-07-12 12:01:58.717932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.271 [2024-07-12 12:01:58.718171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.271 [2024-07-12 12:01:58.718415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.271 [2024-07-12 12:01:58.718440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.271 [2024-07-12 12:01:58.718457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.271 [2024-07-12 12:01:58.722038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.271 [2024-07-12 12:01:58.731534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.271 [2024-07-12 12:01:58.731911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.271 [2024-07-12 12:01:58.731943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.271 [2024-07-12 12:01:58.731961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.271 [2024-07-12 12:01:58.732200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.271 [2024-07-12 12:01:58.732444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.271 [2024-07-12 12:01:58.732468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.271 [2024-07-12 12:01:58.732485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.271 [2024-07-12 12:01:58.736066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.271 [2024-07-12 12:01:58.745543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.271 [2024-07-12 12:01:58.745922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.271 [2024-07-12 12:01:58.745959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.271 [2024-07-12 12:01:58.745979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.271 [2024-07-12 12:01:58.746218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.271 [2024-07-12 12:01:58.746462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.271 [2024-07-12 12:01:58.746487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.271 [2024-07-12 12:01:58.746502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.271 [2024-07-12 12:01:58.750082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.539 [2024-07-12 12:01:58.759507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.759925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.539 [2024-07-12 12:01:58.759956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.539 [2024-07-12 12:01:58.759973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.539 [2024-07-12 12:01:58.760226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.539 [2024-07-12 12:01:58.760471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.539 [2024-07-12 12:01:58.760496] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.539 [2024-07-12 12:01:58.760512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.539 [2024-07-12 12:01:58.764094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.539 [2024-07-12 12:01:58.773429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.773806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.539 [2024-07-12 12:01:58.773839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.539 [2024-07-12 12:01:58.773857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.539 [2024-07-12 12:01:58.774107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.539 [2024-07-12 12:01:58.774351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.539 [2024-07-12 12:01:58.774376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.539 [2024-07-12 12:01:58.774392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.539 [2024-07-12 12:01:58.777969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.539 [2024-07-12 12:01:58.787449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.787851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.539 [2024-07-12 12:01:58.787889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.539 [2024-07-12 12:01:58.787908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.539 [2024-07-12 12:01:58.788147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.539 [2024-07-12 12:01:58.788397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.539 [2024-07-12 12:01:58.788423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.539 [2024-07-12 12:01:58.788439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.539 [2024-07-12 12:01:58.792020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.539 [2024-07-12 12:01:58.801299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.801685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.539 [2024-07-12 12:01:58.801717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.539 [2024-07-12 12:01:58.801735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.539 [2024-07-12 12:01:58.801985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.539 [2024-07-12 12:01:58.802230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.539 [2024-07-12 12:01:58.802255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.539 [2024-07-12 12:01:58.802271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.539 [2024-07-12 12:01:58.805841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.539 [2024-07-12 12:01:58.815374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.815764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.539 [2024-07-12 12:01:58.815795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.539 [2024-07-12 12:01:58.815814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.539 [2024-07-12 12:01:58.816064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.539 [2024-07-12 12:01:58.816309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.539 [2024-07-12 12:01:58.816334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.539 [2024-07-12 12:01:58.816351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.539 [2024-07-12 12:01:58.819927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.539 [2024-07-12 12:01:58.829411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.539 [2024-07-12 12:01:58.829803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.829835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.829853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.830100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.830346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.830371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.830387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.833972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.540 [2024-07-12 12:01:58.843253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.843652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.843684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.843701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.843951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.844195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.844221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.844237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.847805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.540 [2024-07-12 12:01:58.857086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.857460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.857491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.857509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.857747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.858003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.858028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.858044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.861612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.540 [2024-07-12 12:01:58.871098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.871457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.871488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.871506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.871745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.872000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.872025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.872041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.875609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.540 [2024-07-12 12:01:58.885094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.885492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.885524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.885547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.885788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.886042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.886068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.886084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.889655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.540 [2024-07-12 12:01:58.899141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.899536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.899568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.899586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.899825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.540 [2024-07-12 12:01:58.900079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.540 [2024-07-12 12:01:58.900105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.540 [2024-07-12 12:01:58.900121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.540 [2024-07-12 12:01:58.903691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.540 [2024-07-12 12:01:58.913176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.540 [2024-07-12 12:01:58.913540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.540 [2024-07-12 12:01:58.913571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.540 [2024-07-12 12:01:58.913589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.540 [2024-07-12 12:01:58.913827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.914080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.914106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.914123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.917692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.541 [2024-07-12 12:01:58.927183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.927545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.541 [2024-07-12 12:01:58.927577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.541 [2024-07-12 12:01:58.927595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.541 [2024-07-12 12:01:58.927834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.928086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.928117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.928134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.931704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.541 [2024-07-12 12:01:58.941187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.941576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.541 [2024-07-12 12:01:58.941607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.541 [2024-07-12 12:01:58.941625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.541 [2024-07-12 12:01:58.941863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.942118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.942143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.942159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.945726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.541 [2024-07-12 12:01:58.955218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.955621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.541 [2024-07-12 12:01:58.955652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.541 [2024-07-12 12:01:58.955670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.541 [2024-07-12 12:01:58.955917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.956161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.956186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.956202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.959768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.541 [2024-07-12 12:01:58.969262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.969648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.541 [2024-07-12 12:01:58.969680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.541 [2024-07-12 12:01:58.969698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.541 [2024-07-12 12:01:58.969946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.970191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.970217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.970233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.973802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.541 [2024-07-12 12:01:58.983124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.983540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.541 [2024-07-12 12:01:58.983571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.541 [2024-07-12 12:01:58.983589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.541 [2024-07-12 12:01:58.983828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.541 [2024-07-12 12:01:58.984080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-07-12 12:01:58.984106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-07-12 12:01:58.984122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.541 [2024-07-12 12:01:58.987693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.541 [2024-07-12 12:01:58.996982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-07-12 12:01:58.997389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.542 [2024-07-12 12:01:58.997437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.542 [2024-07-12 12:01:58.997456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.542 [2024-07-12 12:01:58.997695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.542 [2024-07-12 12:01:58.997948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.542 [2024-07-12 12:01:58.997974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.542 [2024-07-12 12:01:58.997990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.542 [2024-07-12 12:01:59.001561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.542 [2024-07-12 12:01:59.010842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.542 [2024-07-12 12:01:59.011249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.542 [2024-07-12 12:01:59.011281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.542 [2024-07-12 12:01:59.011299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.542 [2024-07-12 12:01:59.011539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.542 [2024-07-12 12:01:59.011783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.542 [2024-07-12 12:01:59.011808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.542 [2024-07-12 12:01:59.011824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.542 [2024-07-12 12:01:59.015405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.542 [2024-07-12 12:01:59.024763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.542 [2024-07-12 12:01:59.025192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.542 [2024-07-12 12:01:59.025224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.542 [2024-07-12 12:01:59.025242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.542 [2024-07-12 12:01:59.025493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.542 [2024-07-12 12:01:59.025738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.542 [2024-07-12 12:01:59.025762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.542 [2024-07-12 12:01:59.025779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.029391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.806 [2024-07-12 12:01:59.038719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.039075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.039107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.039130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.039368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.039621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.039647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.039663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.043267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.806 [2024-07-12 12:01:59.052767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.053127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.053159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.053178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.053417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.053661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.053686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.053702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.057289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.806 [2024-07-12 12:01:59.066794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.067173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.067214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.067233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.067472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.067716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.067741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.067763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.071347] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.806 [2024-07-12 12:01:59.080674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.081064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.081097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.081116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.081354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.081598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.081622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.081639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.085219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.806 [2024-07-12 12:01:59.094711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.095111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.095153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.095171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.095409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.095654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.095678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.095695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.099271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.806 [2024-07-12 12:01:59.108557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.108947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.108979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.108997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.109235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.109479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.109503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.109519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.113098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.806 [2024-07-12 12:01:59.122584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.122997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.123029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.123047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.806 [2024-07-12 12:01:59.123297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.806 [2024-07-12 12:01:59.123540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.806 [2024-07-12 12:01:59.123565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.806 [2024-07-12 12:01:59.123580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.806 [2024-07-12 12:01:59.127181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.806 [2024-07-12 12:01:59.136468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.806 [2024-07-12 12:01:59.136857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.806 [2024-07-12 12:01:59.136895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.806 [2024-07-12 12:01:59.136914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.137153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.137397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.137422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.137438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.141023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.150505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.150908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.150951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.150969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.151207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.151451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.151475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.151491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.155071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.807 [2024-07-12 12:01:59.164346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.164756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.164787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.164815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.165066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.165318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.165343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.165359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.168936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.178207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.178581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.178622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.178640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.178888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.179133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.179158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.179174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.182745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.807 [2024-07-12 12:01:59.192230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.192617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.192649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.192666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.192916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.193159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.193184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.193200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.196770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.206067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.206431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.206464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.206482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.206721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.206977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.207003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.207019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.210594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.807 [2024-07-12 12:01:59.220075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.220464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.220505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.220523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.220767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.221022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.221048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.221065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.224639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.233916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.234278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.234309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.234327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.234566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.234809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.234834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.234850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.238428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.807 [2024-07-12 12:01:59.247913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.248309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.248350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.248368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.248612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.248855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.248890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.248907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.252475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.261752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.262168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.262205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.262230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.262468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.262713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.262737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.262753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.266339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.807 [2024-07-12 12:01:59.275612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.276013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.807 [2024-07-12 12:01:59.276045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.807 [2024-07-12 12:01:59.276063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.807 [2024-07-12 12:01:59.276311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.807 [2024-07-12 12:01:59.276555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.807 [2024-07-12 12:01:59.276579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.807 [2024-07-12 12:01:59.276595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.807 [2024-07-12 12:01:59.280177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.807 [2024-07-12 12:01:59.289454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.807 [2024-07-12 12:01:59.289841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.808 [2024-07-12 12:01:59.289881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:09.808 [2024-07-12 12:01:59.289900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:09.808 [2024-07-12 12:01:59.290148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:09.808 [2024-07-12 12:01:59.290392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.808 [2024-07-12 12:01:59.290416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.808 [2024-07-12 12:01:59.290433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.808 [2024-07-12 12:01:59.294008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.067 [2024-07-12 12:01:59.303496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.067 [2024-07-12 12:01:59.303903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.067 [2024-07-12 12:01:59.303937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.067 [2024-07-12 12:01:59.303960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.067 [2024-07-12 12:01:59.304200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.067 [2024-07-12 12:01:59.304453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.067 [2024-07-12 12:01:59.304478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.067 [2024-07-12 12:01:59.304495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.308079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.317353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.317745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.317784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.317802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.318057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.318303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.318327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.318343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.321919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.068 [2024-07-12 12:01:59.331194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.331587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.331620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.331638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.331888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.332133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.332159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.332175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.335743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.345235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.345600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.345631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.345650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.345900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.346145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.346170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.346187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.349755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.068 [2024-07-12 12:01:59.359243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.359699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.359752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.359771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.360018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.360263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.360287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.360304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.363883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.373157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.373520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.373552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.373570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.373808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.374062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.374088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.374104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.377674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.068 [2024-07-12 12:01:59.387162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.387568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.387599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.387617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.387855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.388136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.388162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.388178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.391745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.401021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.401408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.401439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.401463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.401702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.401958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.401984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.402001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.405571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.068 [2024-07-12 12:01:59.414847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.415238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.415269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.415287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.415525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.415768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.415792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.415808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.419387] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.428877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.429265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.429307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.429325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.429568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.429811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.429836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.429852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.433432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.068 [2024-07-12 12:01:59.442704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.068 [2024-07-12 12:01:59.443109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.068 [2024-07-12 12:01:59.443141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.068 [2024-07-12 12:01:59.443159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.068 [2024-07-12 12:01:59.443399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.068 [2024-07-12 12:01:59.443643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.068 [2024-07-12 12:01:59.443675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.068 [2024-07-12 12:01:59.443692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.068 [2024-07-12 12:01:59.447279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.068 [2024-07-12 12:01:59.456612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.457026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.457058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.457087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.457325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.457569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.457594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.457610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.461211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.069 [2024-07-12 12:01:59.470493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.470884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.470922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.470940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.471179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.471424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.471450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.471466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.475053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.069 [2024-07-12 12:01:59.484338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.484734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.484767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.484785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.485038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.485282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.485308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.485325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.488905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.069 [2024-07-12 12:01:59.498182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.498577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.498609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.498628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.498880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.499126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.499152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.499168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.502741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.069 [2024-07-12 12:01:59.512025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.512401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.512434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.512452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.512691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.512950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.512977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.512993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.516590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.069 [2024-07-12 12:01:59.525879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.526277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.526310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.526328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.526568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.526813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.526840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.526856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.530442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.069 [2024-07-12 12:01:59.539716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.540093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.540125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.540143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.540389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.540632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.540658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.540675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.544262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.069 [2024-07-12 12:01:59.553758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.069 [2024-07-12 12:01:59.554160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.069 [2024-07-12 12:01:59.554192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.069 [2024-07-12 12:01:59.554211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.069 [2024-07-12 12:01:59.554450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.069 [2024-07-12 12:01:59.554693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.069 [2024-07-12 12:01:59.554719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.069 [2024-07-12 12:01:59.554735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.069 [2024-07-12 12:01:59.558408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.329 [2024-07-12 12:01:59.567823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.568234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.568267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.568285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.568525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.568768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.568794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.568810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.572402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.330 [2024-07-12 12:01:59.581701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.582114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.582147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.582165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.582404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.582647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.582673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.582695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.586286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.330 [2024-07-12 12:01:59.595582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.595980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.596012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.596031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.596270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.596515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.596541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.596557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.600156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.330 [2024-07-12 12:01:59.609430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.609821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.609853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.609880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.610121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.610364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.610390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.610407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.614005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.330 [2024-07-12 12:01:59.623308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.623726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.623775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.623794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.624047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.624291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.624316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.624332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.627911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.330 [2024-07-12 12:01:59.637196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.637590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.637623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.637641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.637893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.638149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.638175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.638191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.641765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.330 [2024-07-12 12:01:59.651057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.651527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.651578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.651596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.330 [2024-07-12 12:01:59.651834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.330 [2024-07-12 12:01:59.652091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.330 [2024-07-12 12:01:59.652119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.330 [2024-07-12 12:01:59.652135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.330 [2024-07-12 12:01:59.655707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.330 [2024-07-12 12:01:59.664995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.330 [2024-07-12 12:01:59.665460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.330 [2024-07-12 12:01:59.665491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.330 [2024-07-12 12:01:59.665509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.665748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.666004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.666031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.666047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.669617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.331 [2024-07-12 12:01:59.678899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.679336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.679367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.679385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.679624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.679884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.679911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.679927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.683496] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.331 [2024-07-12 12:01:59.692772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.693219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.693271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.693289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.693527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.693769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.693795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.693812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.697396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.331 [2024-07-12 12:01:59.706685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.707063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.707096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.707115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.707354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.707597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.707624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.707640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.711322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.331 [2024-07-12 12:01:59.720615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.720992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.721026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.721044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.721284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.721529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.721555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.721572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.725169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.331 [2024-07-12 12:01:59.734448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.734928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.734962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.734980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.735220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.735462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.735488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.735504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.739087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.331 [2024-07-12 12:01:59.748364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.748764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.748797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.748815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.749068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.749313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.749339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.749355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.752938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.331 [2024-07-12 12:01:59.762217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.762607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.762639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.762657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.762908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.763153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.763179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.763197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.766770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.331 [2024-07-12 12:01:59.776053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.776444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.776475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.331 [2024-07-12 12:01:59.776499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.331 [2024-07-12 12:01:59.776739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.331 [2024-07-12 12:01:59.776996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.331 [2024-07-12 12:01:59.777023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.331 [2024-07-12 12:01:59.777039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.331 [2024-07-12 12:01:59.780612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.331 [2024-07-12 12:01:59.789896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.331 [2024-07-12 12:01:59.790291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.331 [2024-07-12 12:01:59.790323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.332 [2024-07-12 12:01:59.790341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.332 [2024-07-12 12:01:59.790580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.332 [2024-07-12 12:01:59.790823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.332 [2024-07-12 12:01:59.790849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.332 [2024-07-12 12:01:59.790876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.332 [2024-07-12 12:01:59.794453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.332 [2024-07-12 12:01:59.803731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.332 [2024-07-12 12:01:59.804113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.332 [2024-07-12 12:01:59.804146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.332 [2024-07-12 12:01:59.804164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.332 [2024-07-12 12:01:59.804403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.332 [2024-07-12 12:01:59.804648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.332 [2024-07-12 12:01:59.804674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.332 [2024-07-12 12:01:59.804691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.332 [2024-07-12 12:01:59.808275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.332 [2024-07-12 12:01:59.817789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.332 [2024-07-12 12:01:59.818195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.332 [2024-07-12 12:01:59.818228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.332 [2024-07-12 12:01:59.818247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.332 [2024-07-12 12:01:59.818487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.332 [2024-07-12 12:01:59.818739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.332 [2024-07-12 12:01:59.818764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.332 [2024-07-12 12:01:59.818781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.332 [2024-07-12 12:01:59.822451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.592 [2024-07-12 12:01:59.831832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.832237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.832270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.832289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.832528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.832771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.832797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.832812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.836425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.592 [2024-07-12 12:01:59.845707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.846083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.846116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.846135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.846375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.846618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.846643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.846660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.850240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.592 [2024-07-12 12:01:59.859727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.860134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.860167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.860186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.860425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.860669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.860695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.860711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.864296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.592 [2024-07-12 12:01:59.873585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.873984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.874016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.874034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.874273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.874516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.874541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.874557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.878138] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.592 [2024-07-12 12:01:59.887621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.888019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.888051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.888069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.888307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.888550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.888576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.888593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.892175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.592 [2024-07-12 12:01:59.901660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.902057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.902088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.902106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.902345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.902588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.902614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.902630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.592 [2024-07-12 12:01:59.906216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.592 [2024-07-12 12:01:59.915697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.592 [2024-07-12 12:01:59.916098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.592 [2024-07-12 12:01:59.916130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.592 [2024-07-12 12:01:59.916153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.592 [2024-07-12 12:01:59.916392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.592 [2024-07-12 12:01:59.916636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.592 [2024-07-12 12:01:59.916662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.592 [2024-07-12 12:01:59.916678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.920262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:01:59.929545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:01:59.929947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:01:59.929981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:01:59.929999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:01:59.930238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:01:59.930481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:01:59.930507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:01:59.930524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.934105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.593 [2024-07-12 12:01:59.943590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:01:59.943973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:01:59.944006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:01:59.944024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:01:59.944264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:01:59.944507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:01:59.944534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:01:59.944550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.948132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:01:59.957620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:01:59.957999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:01:59.958032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:01:59.958050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:01:59.958288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:01:59.958531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:01:59.958563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:01:59.958580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.962164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.593 [2024-07-12 12:01:59.971646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:01:59.972043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:01:59.972075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:01:59.972092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:01:59.972331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:01:59.972574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:01:59.972600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:01:59.972617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.976202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:01:59.985686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:01:59.986083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:01:59.986115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:01:59.986133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:01:59.986371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:01:59.986613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:01:59.986639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:01:59.986656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:01:59.990239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.593 [2024-07-12 12:01:59.999722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.000104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.000136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.000155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.000394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.000637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:02:00.000663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:02:00.000679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:02:00.004296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:02:00.013582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.013982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.014015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.014034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.014274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.014517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:02:00.014543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:02:00.014560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:02:00.018155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.593 [2024-07-12 12:02:00.027615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.028059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.028096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.028116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.028358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.028604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:02:00.028630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:02:00.028647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:02:00.032233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:02:00.041530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.041901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.041935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.041956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.042195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.042441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:02:00.042467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:02:00.042483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:02:00.046068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.593 [2024-07-12 12:02:00.055571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.055921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.055955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.055974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.056221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.056466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.593 [2024-07-12 12:02:00.056492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.593 [2024-07-12 12:02:00.056508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.593 [2024-07-12 12:02:00.060088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.593 [2024-07-12 12:02:00.069580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.593 [2024-07-12 12:02:00.069939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.593 [2024-07-12 12:02:00.069972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.593 [2024-07-12 12:02:00.069991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.593 [2024-07-12 12:02:00.070230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.593 [2024-07-12 12:02:00.070475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.594 [2024-07-12 12:02:00.070500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.594 [2024-07-12 12:02:00.070516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.594 [2024-07-12 12:02:00.074100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.594 [2024-07-12 12:02:00.083683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.594 [2024-07-12 12:02:00.084058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.594 [2024-07-12 12:02:00.084092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.594 [2024-07-12 12:02:00.084110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.594 [2024-07-12 12:02:00.084349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.084594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.084620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.084636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.088255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.855 [2024-07-12 12:02:00.097653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.098024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.098058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.098077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.098316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.098560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.098584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.098608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.102224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.855 [2024-07-12 12:02:00.111531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.111908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.111942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.111962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.112201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.112446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.112471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.112488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.116075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.855 [2024-07-12 12:02:00.125579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.125950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.125991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.126010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.126249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.126492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.126517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.126533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.130122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.855 [2024-07-12 12:02:00.139644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.140023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.140056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.140074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.140312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.140557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.140582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.140597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.144184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.855 [2024-07-12 12:02:00.153688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.154045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.154082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.154101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.154340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.154584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.154609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.154625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.158209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.855 [2024-07-12 12:02:00.167707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.168086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.168118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.168136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.168374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.168618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.168643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.168660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.172245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.855 [2024-07-12 12:02:00.181744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.182143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.182175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.182192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.182431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.182675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.182699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.182715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.186299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.855 [2024-07-12 12:02:00.195585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.195967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.195999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.196017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.196256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.196506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.196531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.196547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.200136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.855 [2024-07-12 12:02:00.209461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.209843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.209883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.209904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.210151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.210396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.855 [2024-07-12 12:02:00.210421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.855 [2024-07-12 12:02:00.210437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.855 [2024-07-12 12:02:00.214016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.855 [2024-07-12 12:02:00.223344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.855 [2024-07-12 12:02:00.223710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.855 [2024-07-12 12:02:00.223742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.855 [2024-07-12 12:02:00.223760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.855 [2024-07-12 12:02:00.224016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.855 [2024-07-12 12:02:00.224261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.224287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.224303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.227891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.856 [2024-07-12 12:02:00.237387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.237777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.237809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.237827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.238077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.238322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.238347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.238363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.241956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.856 [2024-07-12 12:02:00.251243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.251632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.251663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.251681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.251932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.252176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.252201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.252217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.255790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.856 [2024-07-12 12:02:00.265094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.265466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.265499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.265518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.265758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.266017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.266043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.266059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.269635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.856 [2024-07-12 12:02:00.279141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.279539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.279570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.279588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.279826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.280083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.280109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.280126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.283718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.856 [2024-07-12 12:02:00.293012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.293377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.293410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.293434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.293675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.293930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.293956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.293972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.297542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.856 [2024-07-12 12:02:00.307059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.307427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.307461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.307479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.307720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.307977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.308002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.308018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.311587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.856 [2024-07-12 12:02:00.321098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.321496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.321529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.321548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.321787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.322044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.322070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.322086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.325663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.856 [2024-07-12 12:02:00.334950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.856 [2024-07-12 12:02:00.335349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.856 [2024-07-12 12:02:00.335383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:10.856 [2024-07-12 12:02:00.335401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:10.856 [2024-07-12 12:02:00.335641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:10.856 [2024-07-12 12:02:00.335897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.856 [2024-07-12 12:02:00.335929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.856 [2024-07-12 12:02:00.335947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.856 [2024-07-12 12:02:00.339523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.116 [2024-07-12 12:02:00.348946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.116 [2024-07-12 12:02:00.349326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.116 [2024-07-12 12:02:00.349360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.116 [2024-07-12 12:02:00.349379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.116 [2024-07-12 12:02:00.349618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.116 [2024-07-12 12:02:00.349863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.116 [2024-07-12 12:02:00.349900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.116 [2024-07-12 12:02:00.349917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.116 [2024-07-12 12:02:00.353543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.116 [2024-07-12 12:02:00.362827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.116 [2024-07-12 12:02:00.363227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.116 [2024-07-12 12:02:00.363260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.116 [2024-07-12 12:02:00.363278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.116 [2024-07-12 12:02:00.363516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.116 [2024-07-12 12:02:00.363760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.116 [2024-07-12 12:02:00.363786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.116 [2024-07-12 12:02:00.363803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.367389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.376667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.377043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.377076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.377094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.377332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.377576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.377602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.377618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.381203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.117 [2024-07-12 12:02:00.390700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.391095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.391127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.391145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.391383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.391626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.391653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.391669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.395257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.404745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.405154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.405186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.405204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.405443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.405685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.405710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.405726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.409310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.117 [2024-07-12 12:02:00.418586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.418986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.419019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.419037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.419277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.419519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.419545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.419562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.423144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.432423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.432917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.432949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.432968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.433213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.433457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.433482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.433499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.437078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.117 [2024-07-12 12:02:00.446353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.446831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.446864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.446894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.447133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.447377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.447402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.447418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.450997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.460279] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.460655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.460685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.460703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.460956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.461200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.461225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.461242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.464813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.117 [2024-07-12 12:02:00.474324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.474752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.474787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.474806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.475056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.475300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.475326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.475348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.478930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.488209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.488603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.488635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.488654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.488916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.489161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.489187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.489203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.492775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.117 [2024-07-12 12:02:00.502068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.502474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.502506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.502524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.502762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.503017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.117 [2024-07-12 12:02:00.503042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.117 [2024-07-12 12:02:00.503058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.117 [2024-07-12 12:02:00.506643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.117 [2024-07-12 12:02:00.515942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.117 [2024-07-12 12:02:00.516302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.117 [2024-07-12 12:02:00.516334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.117 [2024-07-12 12:02:00.516352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.117 [2024-07-12 12:02:00.516591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.117 [2024-07-12 12:02:00.516834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.516860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.516887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.520474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.118 [2024-07-12 12:02:00.529974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.530367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.530399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.530417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.530655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.530911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.530937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.530954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.534525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.118 [2024-07-12 12:02:00.544027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.544420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.544451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.544469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.544709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.544964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.544991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.545007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.548580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.118 [2024-07-12 12:02:00.557888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.558266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.558298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.558316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.558555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.558798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.558824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.558840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.562426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.118 [2024-07-12 12:02:00.571922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.572314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.572346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.572364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.572608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.572851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.572889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.572907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.576478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.118 [2024-07-12 12:02:00.585754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.586138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.586171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.586189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.586427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.586672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.586697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.586714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.590298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.118 [2024-07-12 12:02:00.599799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.118 [2024-07-12 12:02:00.600213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.118 [2024-07-12 12:02:00.600246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.118 [2024-07-12 12:02:00.600264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.118 [2024-07-12 12:02:00.600502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.118 [2024-07-12 12:02:00.600746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.118 [2024-07-12 12:02:00.600770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.118 [2024-07-12 12:02:00.600789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.118 [2024-07-12 12:02:00.604425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.379 [2024-07-12 12:02:00.613902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.614287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.614320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.614339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.614579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.614821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.379 [2024-07-12 12:02:00.614848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.379 [2024-07-12 12:02:00.614883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.379 [2024-07-12 12:02:00.618483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.379 [2024-07-12 12:02:00.627816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.628181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.628214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.628233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.628472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.628715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.379 [2024-07-12 12:02:00.628740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.379 [2024-07-12 12:02:00.628756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.379 [2024-07-12 12:02:00.632336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.379 [2024-07-12 12:02:00.641828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.642213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.642245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.642263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.642502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.642744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.379 [2024-07-12 12:02:00.642770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.379 [2024-07-12 12:02:00.642786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.379 [2024-07-12 12:02:00.646367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.379 [2024-07-12 12:02:00.655873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.656278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.656309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.656327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.656566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.656809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.379 [2024-07-12 12:02:00.656834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.379 [2024-07-12 12:02:00.656851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.379 [2024-07-12 12:02:00.660437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.379 [2024-07-12 12:02:00.669810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.670167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.670205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.670225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.670464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.670709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.379 [2024-07-12 12:02:00.670736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.379 [2024-07-12 12:02:00.670752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.379 [2024-07-12 12:02:00.674337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.379 [2024-07-12 12:02:00.683828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.379 [2024-07-12 12:02:00.684230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-12 12:02:00.684263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.379 [2024-07-12 12:02:00.684282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.379 [2024-07-12 12:02:00.684521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.379 [2024-07-12 12:02:00.684766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.684792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.684809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.688396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.697677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.698048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.698081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.698099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.698338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.698583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.698609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.698626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.702210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.380 [2024-07-12 12:02:00.711709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.712086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.712119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.712137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.712376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.712629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.712656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.712672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.716258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.725752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.726141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.726174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.726193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.726433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.726677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.726703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.726719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.730302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.380 [2024-07-12 12:02:00.739789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.740188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.740221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.740239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.740477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.740720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.740746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.740762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.744344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.753829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.754202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.754235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.754253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.754493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.754737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.754763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.754780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.758368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.380 [2024-07-12 12:02:00.767309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.767623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.767650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.767665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.767891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.768092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.768114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.768129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.771082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.780559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.780914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.780943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.780959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.781195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.781390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.781411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.781424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.784382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.380 [2024-07-12 12:02:00.793740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.794091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.794120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.794137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.794375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.794584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.794606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.794619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.797649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.807082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.807507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.807538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.807560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.807802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.808042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.808065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.808079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.811034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.380 [2024-07-12 12:02:00.820305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.820658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.820686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.820701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.820944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.821160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.380 [2024-07-12 12:02:00.821183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.380 [2024-07-12 12:02:00.821196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.380 [2024-07-12 12:02:00.824092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.380 [2024-07-12 12:02:00.833472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.380 [2024-07-12 12:02:00.833856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-12 12:02:00.833890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.380 [2024-07-12 12:02:00.833922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.380 [2024-07-12 12:02:00.834165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.380 [2024-07-12 12:02:00.834377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.381 [2024-07-12 12:02:00.834398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.381 [2024-07-12 12:02:00.834412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.381 [2024-07-12 12:02:00.837366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.381 [2024-07-12 12:02:00.846747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.381 [2024-07-12 12:02:00.847147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-12 12:02:00.847176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.381 [2024-07-12 12:02:00.847193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.381 [2024-07-12 12:02:00.847423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.381 [2024-07-12 12:02:00.847635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.381 [2024-07-12 12:02:00.847663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.381 [2024-07-12 12:02:00.847678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.381 [2024-07-12 12:02:00.850969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.381 [2024-07-12 12:02:00.860123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.381 [2024-07-12 12:02:00.860502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-12 12:02:00.860532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.381 [2024-07-12 12:02:00.860548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.381 [2024-07-12 12:02:00.860785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.381 [2024-07-12 12:02:00.861032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.381 [2024-07-12 12:02:00.861055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.381 [2024-07-12 12:02:00.861070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.381 [2024-07-12 12:02:00.864127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.642 [2024-07-12 12:02:00.873494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.873853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.873889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.873906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.874144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.874354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.874376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.874390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.877563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.642 [2024-07-12 12:02:00.886808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.887319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.887349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.887365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.887607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.887817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.887838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.887875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.890834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.642 [2024-07-12 12:02:00.900080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.900511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.900540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.900557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.900797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.901034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.901056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.901071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.904063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.642 [2024-07-12 12:02:00.913335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.913756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.913785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.913803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.914055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.914288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.914309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.914322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.917279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.642 [2024-07-12 12:02:00.926568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.926954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.926983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.927000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.927227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.927436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.927457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.927470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.930427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.642 [2024-07-12 12:02:00.939913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.940341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.940370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.940386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.940631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.940841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.940862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.940899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.943834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.642 [2024-07-12 12:02:00.953292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.953713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.953742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.953758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.954008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.954222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.954244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.954257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.957210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.642 [2024-07-12 12:02:00.966586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.967013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.967043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.642 [2024-07-12 12:02:00.967059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.642 [2024-07-12 12:02:00.967301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.642 [2024-07-12 12:02:00.967510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.642 [2024-07-12 12:02:00.967531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.642 [2024-07-12 12:02:00.967544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.642 [2024-07-12 12:02:00.970552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.642 [2024-07-12 12:02:00.979965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.642 [2024-07-12 12:02:00.980350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.642 [2024-07-12 12:02:00.980378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:00.980394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:00.980635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:00.980845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:00.980873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:00.980909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:00.983844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:00.993228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:00.993615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:00.993643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:00.993659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:00.993891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:00.994108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:00.994130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:00.994143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:00.997096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.643 [2024-07-12 12:02:01.006458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.006814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.006843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.006860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.007112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.007322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.007344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.007357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.010350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:01.019801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.020149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.020178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.020211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.020446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.020641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.020661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.020674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.023631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.643 [2024-07-12 12:02:01.033039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.033417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.033445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.033461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.033698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.033934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.033956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.033970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.036924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:01.046326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.046680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.046707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.046723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.046972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.047208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.047244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.047258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.050213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.643 [2024-07-12 12:02:01.059491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.059878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.059906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.059938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.060182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.060394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.060416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.060429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.063463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:01.072805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.073227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.073256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.073272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.073526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.073722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.073743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.073756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.076753] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.643 [2024-07-12 12:02:01.086018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.086424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.086453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.086469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.086692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.086929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.086951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.086965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.089915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:01.099315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.099701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.099731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.099747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.099971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.100215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.100251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.100265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.103517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.643 [2024-07-12 12:02:01.112587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.643 [2024-07-12 12:02:01.113006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.643 [2024-07-12 12:02:01.113035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.643 [2024-07-12 12:02:01.113051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.643 [2024-07-12 12:02:01.113296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.643 [2024-07-12 12:02:01.113504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.643 [2024-07-12 12:02:01.113526] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.643 [2024-07-12 12:02:01.113544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.643 [2024-07-12 12:02:01.116587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.643 [2024-07-12 12:02:01.125934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.644 [2024-07-12 12:02:01.126325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.644 [2024-07-12 12:02:01.126355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.644 [2024-07-12 12:02:01.126371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.644 [2024-07-12 12:02:01.126614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.644 [2024-07-12 12:02:01.126824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.644 [2024-07-12 12:02:01.126859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.644 [2024-07-12 12:02:01.126883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.644 [2024-07-12 12:02:01.129894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.902 [2024-07-12 12:02:01.139360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.139735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.139764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.139781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.140027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.140240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.140262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.140276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.143280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.902 [2024-07-12 12:02:01.152705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.153086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.153116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.153132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.153373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.153582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.153603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.153617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.156572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.902 [2024-07-12 12:02:01.165978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.166419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.166454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.166472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.166714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.166936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.166958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.166972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.169926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.902 [2024-07-12 12:02:01.179281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.179585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.179613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.179628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.179845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.180054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.180077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.180090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.183040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.902 [2024-07-12 12:02:01.192604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.193025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.193055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.193071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.193312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.193506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.193527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.193540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.196523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.902 [2024-07-12 12:02:01.205883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.206326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.206355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.206372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.206613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.206827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.206864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.206889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.209836] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.902 [2024-07-12 12:02:01.219138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.219568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.219596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.219612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.219845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.220069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.220091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.220104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.223070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.902 [2024-07-12 12:02:01.232486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.232840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.232889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.232907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.233153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.233364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.233384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.233398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.236357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.902 [2024-07-12 12:02:01.245699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.246080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.246109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.246126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.246367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.246578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.246600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.246613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.249588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.902 [2024-07-12 12:02:01.259043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.902 [2024-07-12 12:02:01.259416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.902 [2024-07-12 12:02:01.259444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.902 [2024-07-12 12:02:01.259459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.902 [2024-07-12 12:02:01.259696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.902 [2024-07-12 12:02:01.259917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.902 [2024-07-12 12:02:01.259939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.902 [2024-07-12 12:02:01.259952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.902 [2024-07-12 12:02:01.262920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.903 [2024-07-12 12:02:01.272329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.272728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.272756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.272772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.273011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.273249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.273270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.273283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.276365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.903 [2024-07-12 12:02:01.285699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.286078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.286107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.286123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.286377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.286587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.286607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.286620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.289572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.903 [2024-07-12 12:02:01.299020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.299359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.299387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.299407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.299629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.299854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.299885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.299900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.302956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.903 [2024-07-12 12:02:01.312349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.312707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.312735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.312751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.313002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.313226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.313247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.313260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.316305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.903 [2024-07-12 12:02:01.325792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.326245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.326273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.326288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.326501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.326711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.326731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.326744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.329800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.903 [2024-07-12 12:02:01.339177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.339601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.339629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.339645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.339861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.340112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.340140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.340155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.343198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.903 [2024-07-12 12:02:01.352405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.352807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.352835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.352851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.353072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.353304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.353325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.353339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.356663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.903 [2024-07-12 12:02:01.365863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.366286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.366329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.366349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.366591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.366790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.366810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.366823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.369959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.903 [2024-07-12 12:02:01.379325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.379699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.379727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.379742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.379968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.380211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.380231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.380243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.903 [2024-07-12 12:02:01.383360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.903 [2024-07-12 12:02:01.392766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.903 [2024-07-12 12:02:01.393200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.903 [2024-07-12 12:02:01.393230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:11.903 [2024-07-12 12:02:01.393246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:11.903 [2024-07-12 12:02:01.393461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:11.903 [2024-07-12 12:02:01.393694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.903 [2024-07-12 12:02:01.393715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.903 [2024-07-12 12:02:01.393728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.396821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.161 [2024-07-12 12:02:01.406126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.406486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.406514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.406531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.406774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.406998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.407018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.407032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.410487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.161 [2024-07-12 12:02:01.419977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.420346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.420377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.420395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.420633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.420887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.420912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.420927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.424511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.161 [2024-07-12 12:02:01.434014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.434378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.434409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.434426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.434671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.434928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.434952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.434968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.438602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.161 [2024-07-12 12:02:01.447902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.448297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.448329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.448346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.448584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.448827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.448851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.448877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.452450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.161 [2024-07-12 12:02:01.461945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.462341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.462372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.462389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.462628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.462879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.462904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.462919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.466483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.161 [2024-07-12 12:02:01.475972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.476365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.476395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.476413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.476651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.476904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.476929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.476951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.480518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.161 [2024-07-12 12:02:01.490011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.490414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.490445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.490462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.490699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.490953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.490977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.490993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.161 [2024-07-12 12:02:01.494565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.161 [2024-07-12 12:02:01.504054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.161 [2024-07-12 12:02:01.504456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.161 [2024-07-12 12:02:01.504487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.161 [2024-07-12 12:02:01.504504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.161 [2024-07-12 12:02:01.504742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.161 [2024-07-12 12:02:01.504996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.161 [2024-07-12 12:02:01.505020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.161 [2024-07-12 12:02:01.505035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.508612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.162 [2024-07-12 12:02:01.517889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.518257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.518288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.518305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.518543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.518786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.518810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.518825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.522404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.162 [2024-07-12 12:02:01.531896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.532269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.532300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.532317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.532555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.532797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.532822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.532837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.536416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.162 [2024-07-12 12:02:01.545905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.546288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.546318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.546335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.546573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.546815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.546839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.546854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.550437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
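The entries above are one reconnect cycle repeated over and over: bdev_nvme disconnects the controller, nvme_tcp tries to reopen the qpair to 10.0.0.2 port 4420, connect() in posix_sock_create returns errno = 111, and the reset completes with "Resetting controller failed." before the next attempt begins. On Linux errno 111 is ECONNREFUSED, which is expected here because the target process that owned that listener has been killed by the test. A minimal sketch of the same failure outside SPDK, assuming a local port with no listener (illustration only, not part of the test scripts):

```python
import errno
import socket

# Connect to an address where nothing is listening. On Linux the attempt
# fails immediately with ECONNREFUSED (errno 111), the same errno that
# posix_sock_create reports in the log entries above.
ADDR, PORT = "127.0.0.1", 4420  # assumption: no local listener on this port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect((ADDR, PORT))
except OSError as exc:
    print(exc.errno, errno.errorcode[exc.errno])  # -> 111 ECONNREFUSED
finally:
    s.close()
```

The cycle keeps failing until bdevperf.sh restarts the target in the lines that follow.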
00:25:12.162 [2024-07-12 12:02:01.559931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.560328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.560358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.560376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.560614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.560856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.560890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.560906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.564474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.162 [2024-07-12 12:02:01.573965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.574355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.574385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.574403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.574641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.574901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.574925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.574941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1027438 Killed "${NVMF_APP[@]}" "$@" 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 [2024-07-12 12:02:01.578512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1028511 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1028511 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1028511 ']' 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:12.162 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 [2024-07-12 12:02:01.587803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.588174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.588204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.588222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.588459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.588702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.588726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.588741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.592320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
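nvmfappstart, traced above, relaunches the target inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the new process (pid 1028511 here) answers on /var/tmp/spdk.sock. A minimal hand-rolled sketch of the same start-and-wait pattern (the polling loop and the rpc.py call are simplifications; the real helpers live in nvmf/common.sh and autotest_common.sh):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to accept commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
          sleep 0.5
  done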
00:25:12.162 [2024-07-12 12:02:01.601794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.602170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.602199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.602214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.602428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.602652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.602674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.602689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.605941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.162 [2024-07-12 12:02:01.615248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.615636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.615664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.615680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.615920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.616133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.616168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.616182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.619410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.162 [2024-07-12 12:02:01.627336] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:25:12.162 [2024-07-12 12:02:01.627390] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.162 [2024-07-12 12:02:01.628654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.162 [2024-07-12 12:02:01.629022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.162 [2024-07-12 12:02:01.629050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.162 [2024-07-12 12:02:01.629067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.162 [2024-07-12 12:02:01.629295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.162 [2024-07-12 12:02:01.629510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.162 [2024-07-12 12:02:01.629530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.162 [2024-07-12 12:02:01.629542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.162 [2024-07-12 12:02:01.632612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.163 [2024-07-12 12:02:01.641934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.163 [2024-07-12 12:02:01.642289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.163 [2024-07-12 12:02:01.642317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.163 [2024-07-12 12:02:01.642332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.163 [2024-07-12 12:02:01.642554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.163 [2024-07-12 12:02:01.642768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.163 [2024-07-12 12:02:01.642793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.163 [2024-07-12 12:02:01.642806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.163 [2024-07-12 12:02:01.645792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.423 [2024-07-12 12:02:01.655431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.423 [2024-07-12 12:02:01.655750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.423 [2024-07-12 12:02:01.655792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.423 [2024-07-12 12:02:01.655808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.423 [2024-07-12 12:02:01.656060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.423 [2024-07-12 12:02:01.656294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.423 [2024-07-12 12:02:01.656314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.423 [2024-07-12 12:02:01.656327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.423 [2024-07-12 12:02:01.659477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.423 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.423 [2024-07-12 12:02:01.669364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.423 [2024-07-12 12:02:01.669757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.423 [2024-07-12 12:02:01.669785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.423 [2024-07-12 12:02:01.669801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.423 [2024-07-12 12:02:01.670038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.423 [2024-07-12 12:02:01.670260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.423 [2024-07-12 12:02:01.670280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.423 [2024-07-12 12:02:01.670293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.423 [2024-07-12 12:02:01.673797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.423 [2024-07-12 12:02:01.683247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.423 [2024-07-12 12:02:01.683634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.423 [2024-07-12 12:02:01.683661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.423 [2024-07-12 12:02:01.683676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.423 [2024-07-12 12:02:01.683920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.423 [2024-07-12 12:02:01.684120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.423 [2024-07-12 12:02:01.684139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.423 [2024-07-12 12:02:01.684152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.423 [2024-07-12 12:02:01.687607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.423 [2024-07-12 12:02:01.697030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.423 [2024-07-12 12:02:01.697145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:12.423 [2024-07-12 12:02:01.697460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.423 [2024-07-12 12:02:01.697504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.423 [2024-07-12 12:02:01.697523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.423 [2024-07-12 12:02:01.697781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.423 [2024-07-12 12:02:01.698011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.423 [2024-07-12 12:02:01.698032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.423 [2024-07-12 12:02:01.698046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.423 [2024-07-12 12:02:01.701540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
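The -m 0xE core mask handed to nvmf_tgt selects cores 1, 2 and 3 (binary 1110), which is why the app reports "Total cores available: 3" above and later starts one reactor on each of cores 1, 2 and 3. The mask arithmetic, spelled out:
  printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE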
00:25:12.423 [2024-07-12 12:02:01.710946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.423 [2024-07-12 12:02:01.711442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.423 [2024-07-12 12:02:01.711478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.423 [2024-07-12 12:02:01.711497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.423 [2024-07-12 12:02:01.711760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.423 [2024-07-12 12:02:01.711972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.423 [2024-07-12 12:02:01.711993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.423 [2024-07-12 12:02:01.712008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.423 [2024-07-12 12:02:01.715466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.724773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.725198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.725244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.725262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.725514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.725713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.725733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.725746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.729228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.424 [2024-07-12 12:02:01.738729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.739144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.739173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.739200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.739443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.739643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.739662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.739676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.743107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.752555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.752957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.753001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.753017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.753247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.753482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.753502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.753515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.756968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.424 [2024-07-12 12:02:01.766424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.766938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.766996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.767016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.767295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.767498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.767518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.767534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.770965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.780250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.780617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.780645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.780676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.780945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.781152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.781183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.781212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.784357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.424 [2024-07-12 12:02:01.793781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.794198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.794230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.794248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.794486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.794729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.794753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.794769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.798287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.807558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.807963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.807992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.808008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.808248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.808491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.808515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.808530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.812034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.813672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.424 [2024-07-12 12:02:01.813724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.424 [2024-07-12 12:02:01.813740] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.424 [2024-07-12 12:02:01.813753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.424 [2024-07-12 12:02:01.813765] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
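The app_setup_trace notices above point at two equivalent ways to capture the trace data for this target instance, roughly as follows (the spdk_trace binary path and the destination files are assumptions; the -s/-i arguments are the ones quoted in the log):
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot, exactly as the log suggests
  cp /dev/shm/nvmf_trace.0 /tmp/                         # or keep the raw shm file for offline analysis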
00:25:12.424 [2024-07-12 12:02:01.813850] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.424 [2024-07-12 12:02:01.814057] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.424 [2024-07-12 12:02:01.814061] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.424 [2024-07-12 12:02:01.821054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.821501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.821535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.821561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.821798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.822044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.822067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.822083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.825276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.834595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.835093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.835130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.835149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.835387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.835603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.835625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.835641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.838688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.424 [2024-07-12 12:02:01.848188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.848707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.424 [2024-07-12 12:02:01.848745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.424 [2024-07-12 12:02:01.848763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.424 [2024-07-12 12:02:01.849008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.424 [2024-07-12 12:02:01.849225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.424 [2024-07-12 12:02:01.849246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.424 [2024-07-12 12:02:01.849262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.424 [2024-07-12 12:02:01.852429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.424 [2024-07-12 12:02:01.861796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.424 [2024-07-12 12:02:01.862327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.425 [2024-07-12 12:02:01.862364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.425 [2024-07-12 12:02:01.862383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.425 [2024-07-12 12:02:01.862607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.425 [2024-07-12 12:02:01.862829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.425 [2024-07-12 12:02:01.862862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.425 [2024-07-12 12:02:01.862889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.425 [2024-07-12 12:02:01.866108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.425 [2024-07-12 12:02:01.875538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.425 [2024-07-12 12:02:01.875989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.425 [2024-07-12 12:02:01.876024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.425 [2024-07-12 12:02:01.876043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.425 [2024-07-12 12:02:01.876265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.425 [2024-07-12 12:02:01.876487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.425 [2024-07-12 12:02:01.876509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.425 [2024-07-12 12:02:01.876525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.425 [2024-07-12 12:02:01.879787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.425 [2024-07-12 12:02:01.889007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.425 [2024-07-12 12:02:01.889514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.425 [2024-07-12 12:02:01.889552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.425 [2024-07-12 12:02:01.889571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.425 [2024-07-12 12:02:01.889812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.425 [2024-07-12 12:02:01.890063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.425 [2024-07-12 12:02:01.890086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.425 [2024-07-12 12:02:01.890103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.425 [2024-07-12 12:02:01.893286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.425 [2024-07-12 12:02:01.902587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.425 [2024-07-12 12:02:01.902939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.425 [2024-07-12 12:02:01.902968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.425 [2024-07-12 12:02:01.902984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.425 [2024-07-12 12:02:01.903200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.425 [2024-07-12 12:02:01.903445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.425 [2024-07-12 12:02:01.903467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.425 [2024-07-12 12:02:01.903481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.425 [2024-07-12 12:02:01.906674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.685 [2024-07-12 12:02:01.916166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.916541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.916571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.916587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.916803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.917036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.917059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.917073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 [2024-07-12 12:02:01.920402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.685 [2024-07-12 12:02:01.929700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:12.685 [2024-07-12 12:02:01.930053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.930083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.930100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:12.685 [2024-07-12 12:02:01.930315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.685 [2024-07-12 12:02:01.930534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.930556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.930570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 [2024-07-12 12:02:01.933788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.685 [2024-07-12 12:02:01.943325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.943697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.943725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.943742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.943967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.944202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.944224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.944237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 [2024-07-12 12:02:01.947479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.685 [2024-07-12 12:02:01.956942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.957089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.685 [2024-07-12 12:02:01.957304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.957332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.957348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.957562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.957790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.957811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.957824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 [2024-07-12 12:02:01.961163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.685 [2024-07-12 12:02:01.970391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.970752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.970780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.970796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.971018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.971264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.971285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.971298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:12.685 [2024-07-12 12:02:01.974452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.685 [2024-07-12 12:02:01.984048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.984432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.984461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.984478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.984708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.984936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.984963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.984979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 [2024-07-12 12:02:01.988135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.685 [2024-07-12 12:02:01.997608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.685 [2024-07-12 12:02:01.998131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.685 [2024-07-12 12:02:01.998167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.685 [2024-07-12 12:02:01.998186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.685 [2024-07-12 12:02:01.998425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.685 [2024-07-12 12:02:01.998641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.685 [2024-07-12 12:02:01.998662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.685 [2024-07-12 12:02:01.998678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.685 Malloc0 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.685 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.686 12:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.686 [2024-07-12 12:02:02.001950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.686 [2024-07-12 12:02:02.011284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.686 [2024-07-12 12:02:02.011641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.686 [2024-07-12 12:02:02.011670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbc70 with addr=10.0.0.2, port=4420 00:25:12.686 [2024-07-12 12:02:02.011686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbc70 is same with the state(5) to be set 00:25:12.686 [2024-07-12 12:02:02.011910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbc70 (9): Bad file descriptor 00:25:12.686 [2024-07-12 12:02:02.012130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.686 [2024-07-12 12:02:02.012166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.686 [2024-07-12 12:02:02.012180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.686 [2024-07-12 12:02:02.015453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:12.686 [2024-07-12 12:02:02.020543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.686 12:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1027741 00:25:12.686 [2024-07-12 12:02:02.025056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.686 [2024-07-12 12:02:02.068122] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
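The rpc_cmd calls interleaved above (bdevperf.sh lines 17-21) rebuild the configuration that the killed target had: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420, at which point the host's next reset attempt finally succeeds ("Resetting controller successful"). Run by hand the same bring-up would look roughly like this (rpc_cmd is a thin wrapper; driving rpc.py directly is an assumption):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420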
00:25:22.666 
00:25:22.666 Latency(us)
00:25:22.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:22.666 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:22.666 Verification LBA range: start 0x0 length 0x4000
00:25:22.666 Nvme1n1 : 15.00 6719.71 26.25 8918.09 0.00 8160.59 776.72 18835.53
00:25:22.666 ===================================================================================================================
00:25:22.666 Total : 6719.71 26.25 8918.09 0.00 8160.59 776.72 18835.53
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:22.666 rmmod nvme_tcp
00:25:22.666 rmmod nvme_fabrics
00:25:22.666 rmmod nvme_keyring
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1028511 ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1028511 ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1028511'
00:25:22.666 killing process with pid 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1028511
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
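The summary table above reports the bdevperf job parameters (verify workload, queue depth 128, 4096-byte I/O over a 15 s run) together with the achieved throughput and latency; the non-zero Fail/s column reflects the windows in which the target was down and resets were failing. A standalone invocation with the same shape would look roughly like this (the binary location and the config-file plumbing are assumptions; the test actually drives it through bdevperf.sh):
  ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 --json /tmp/bdevperf_nvme.json   # hypothetical JSON config attaching the NVMe-oF controller as Nvme1n1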
00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.666 12:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.570 12:02:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.570 00:25:24.570 real 0m22.661s 00:25:24.570 user 1m1.170s 00:25:24.570 sys 0m4.079s 00:25:24.570 12:02:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:24.570 12:02:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:24.570 ************************************ 00:25:24.570 END TEST nvmf_bdevperf 00:25:24.570 ************************************ 00:25:24.570 12:02:13 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:24.570 12:02:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:24.570 12:02:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:24.570 12:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.570 ************************************ 00:25:24.570 START TEST nvmf_target_disconnect 00:25:24.570 ************************************ 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:24.570 * Looking for test storage... 
00:25:24.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:24.570 12:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.571 12:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.473 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
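As a reading aid for the device-discovery trace above: common.sh builds per-NIC-family address lists keyed by "0xVENDOR:0xDEVICE" and later resolves each matching PCI function to its kernel net device through sysfs. A minimal sketch of that pattern follows, assuming the cache is populated from an lspci scan; only the array names, device IDs and the sysfs lookup are visible in the log, everything else here is illustrative.

    # Hedged sketch of the NIC-classification pattern reflected in the xtrace above.
    declare -A pci_bus_cache
    # Assumption: keys look like "0x8086:0x159b" and come from an lspci scan.
    while read -r addr _class vendor device _; do
        pci_bus_cache["0x${vendor}:0x${device}"]+=" $addr"
    done < <(lspci -Dmmn | tr -d '"')

    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # E810 "ice" NICs

    shopt -s nullglob
    net_devs=()
    for pci in "${e810[@]}"; do
        # Each PCI function lists its net device(s) under sysfs, as the trace shows.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")
    done
    printf 'Found net devices: %s\n' "${net_devs[*]}"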
00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.474 12:02:15 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.474 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.733 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.733 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:26.733 12:02:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:25:26.733 00:25:26.733 --- 10.0.0.2 ping statistics --- 00:25:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.733 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:25:26.733 00:25:26.733 --- 10.0.0.1 ping statistics --- 00:25:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.733 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:26.733 ************************************ 00:25:26.733 START TEST nvmf_target_disconnect_tc1 00:25:26.733 ************************************ 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:25:26.733 
12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:26.733 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.733 [2024-07-12 12:02:16.185098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.733 [2024-07-12 12:02:16.185181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c5f90 with addr=10.0.0.2, port=4420 00:25:26.733 [2024-07-12 12:02:16.185221] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.733 [2024-07-12 12:02:16.185242] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.733 [2024-07-12 12:02:16.185257] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:26.733 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:26.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:26.733 Initializing NVMe Controllers 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:26.733 00:25:26.733 real 0m0.094s 00:25:26.733 user 0m0.041s 00:25:26.733 sys 
0m0.052s 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.733 ************************************ 00:25:26.733 END TEST nvmf_target_disconnect_tc1 00:25:26.733 ************************************ 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:26.733 12:02:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:26.991 ************************************ 00:25:26.991 START TEST nvmf_target_disconnect_tc2 00:25:26.991 ************************************ 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1031663 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1031663 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1031663 ']' 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:26.991 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:26.991 [2024-07-12 12:02:16.290773] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:25:26.991 [2024-07-12 12:02:16.290859] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.991 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.991 [2024-07-12 12:02:16.355460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.991 [2024-07-12 12:02:16.463613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.991 [2024-07-12 12:02:16.463676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.991 [2024-07-12 12:02:16.463705] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.991 [2024-07-12 12:02:16.463716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.991 [2024-07-12 12:02:16.463725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.991 [2024-07-12 12:02:16.463806] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:25:26.991 [2024-07-12 12:02:16.463903] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:25:26.991 [2024-07-12 12:02:16.463935] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:25:26.991 [2024-07-12 12:02:16.463938] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.249 Malloc0 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.249 [2024-07-12 12:02:16.636488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.249 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.250 [2024-07-12 12:02:16.664716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1031690 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.250 12:02:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:27.250 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.834 12:02:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1031663 00:25:29.834 12:02:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 
00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 [2024-07-12 12:02:18.690563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting 
I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 [2024-07-12 12:02:18.690946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 
00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Write completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 [2024-07-12 12:02:18.691240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.834 starting I/O failed 00:25:29.834 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 
Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Write completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 Read completed with error (sct=0, sc=8) 00:25:29.835 starting I/O failed 00:25:29.835 [2024-07-12 12:02:18.691579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:29.835 [2024-07-12 12:02:18.691738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.691768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.691888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.691915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 
00:25:29.835 [2024-07-12 12:02:18.692649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.692927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.692953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.693812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.693836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 
00:25:29.835 [2024-07-12 12:02:18.693993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.694134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.694312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.694585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.694753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.694908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.694934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 
00:25:29.835 [2024-07-12 12:02:18.695731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.695967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.695992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.696891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.696918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 
00:25:29.835 [2024-07-12 12:02:18.697008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.697962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.697987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 
00:25:29.835 [2024-07-12 12:02:18.698346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.698909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.698937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.699031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.699058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.699154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.699191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.699278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.699304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.699416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.835 [2024-07-12 12:02:18.699442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.835 qpair failed and we were unable to recover it. 00:25:29.835 [2024-07-12 12:02:18.699580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.699606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.699707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.699733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.699857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.699910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.700952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.700984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.701078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.701887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.701996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.702359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.702883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.702976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.703726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.703862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.703984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.704957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.704984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.705098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.705871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.705986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.706517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.706833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.706976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.707965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.707992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 
00:25:29.836 [2024-07-12 12:02:18.708141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.708182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.708348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.708400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.708617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.708670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.708785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.708816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.708951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.708978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.709070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.709096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.709194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.836 [2024-07-12 12:02:18.709219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.836 qpair failed and we were unable to recover it. 00:25:29.836 [2024-07-12 12:02:18.709314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.709357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.709561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.709588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.709702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.709727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.709905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.709932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.710898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.710929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.711424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.711896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.711922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.712755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.712921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.712960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.713075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.713255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.713496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.713666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.713827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.713979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.714089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.714263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.714371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.714544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.714699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.714863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.714895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.715765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.715952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.715978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.716891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.716917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 
00:25:29.837 [2024-07-12 12:02:18.717323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.717913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.717939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.718054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.718079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.718155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.837 [2024-07-12 12:02:18.718180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.837 qpair failed and we were unable to recover it. 00:25:29.837 [2024-07-12 12:02:18.718317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.718345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.718502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.718529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.718620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.718646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 
00:25:29.838 [2024-07-12 12:02:18.718790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.718823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.718948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.718973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.719941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.719966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 00:25:29.838 [2024-07-12 12:02:18.720056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.838 [2024-07-12 12:02:18.720081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.838 qpair failed and we were unable to recover it. 
00:25:29.838 [2024-07-12 12:02:18.720184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.838 [2024-07-12 12:02:18.720210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:29.838 qpair failed and we were unable to recover it.
00:25:29.838 [2024-07-12 12:02:18.723587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.838 [2024-07-12 12:02:18.723632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:29.838 qpair failed and we were unable to recover it.
00:25:29.839 [2024-07-12 12:02:18.733208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.839 [2024-07-12 12:02:18.733232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:29.839 qpair failed and we were unable to recover it.
00:25:29.840 [2024-07-12 12:02:18.740059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.840 [2024-07-12 12:02:18.740089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:29.840 qpair failed and we were unable to recover it.
00:25:29.841 [2024-07-12 12:02:18.749898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.841 [2024-07-12 12:02:18.749928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:29.841 qpair failed and we were unable to recover it.
00:25:29.841 [2024-07-12 12:02:18.750070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.750947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.750973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 
00:25:29.841 [2024-07-12 12:02:18.751485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.751890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.751932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 
00:25:29.841 [2024-07-12 12:02:18.752764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.752910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.752936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.753922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.753949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.841 qpair failed and we were unable to recover it. 00:25:29.841 [2024-07-12 12:02:18.754032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.841 [2024-07-12 12:02:18.754059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.754198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.754309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.754505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.754639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.754757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.754877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.754903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.755054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.755174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.755339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.755541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.755683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.755854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.755901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.756831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.756860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.757292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.757812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.757978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.758120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.758311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.758474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.758637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.758842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.758873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.758990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.759883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.759910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.760021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.760229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.760423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.760587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.760749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.760905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.760931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.761855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.761991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.762130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.762875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.762988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 
00:25:29.842 [2024-07-12 12:02:18.763539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.842 [2024-07-12 12:02:18.763837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.842 qpair failed and we were unable to recover it. 00:25:29.842 [2024-07-12 12:02:18.763961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.763987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.764810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.764958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.764985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.765817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.765842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.766408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.766862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.766932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.767724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.767903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.767929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.768891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.768916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.769243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.769844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.769880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.770788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.770956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.770983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.771908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.771936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.772026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.772191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.772358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.772511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.772679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.772851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.772882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 
00:25:29.843 [2024-07-12 12:02:18.773777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.773877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.773921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.774025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.774050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.774132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.774156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.843 [2024-07-12 12:02:18.774315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.843 [2024-07-12 12:02:18.774343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.843 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.774439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.774464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.774567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.774594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.774723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.774761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.774908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.774936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.775238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.775895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.775981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.776739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.776967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.776992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.777844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.777876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.777995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.778114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.778253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357bb0 is same with the state(5) to be set 00:25:29.844 [2024-07-12 12:02:18.778397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.778577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.778767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.778910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.778938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.779457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.779883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.779976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.780807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.780958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.780984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.781965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.781993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.782101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.782315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.782493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.782649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.782788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.782898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.782925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.783047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.783224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.783469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.783633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.783741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 
00:25:29.844 [2024-07-12 12:02:18.783854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.783885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.784002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.784029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.844 [2024-07-12 12:02:18.784152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.844 [2024-07-12 12:02:18.784178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.844 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.784944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.784975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.785210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.785925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.785950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.786060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.786223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.786373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.786536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.786715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.786834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.786863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.787890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.787919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.788311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.788974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.788999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.789089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.789210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.789410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.789656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.789804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.789968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.789995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.790933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.790960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.791072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.791211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.791352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.791524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.791678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.791861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.791907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.792950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.792976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.793054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.793877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.793993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.794018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.794129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.794154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.794257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.794284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 
00:25:29.845 [2024-07-12 12:02:18.794373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.794400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.845 qpair failed and we were unable to recover it. 00:25:29.845 [2024-07-12 12:02:18.794501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.845 [2024-07-12 12:02:18.794530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.794617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.794658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.794782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.794807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.794978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.795157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.795318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.795472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.795659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.795816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.795965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.795992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.796186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.796391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.796598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.796739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.796891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.796976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.797115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.797224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.797404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.797589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.797742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.797898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.797941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.798951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.798980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.799103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.799886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.799974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.800102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.800253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.800373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.800557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.800697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.800889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.800933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.801892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.801918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.802175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.802879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.802914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.803035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.803195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.803360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.803546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.803728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.803877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.803922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.804912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.804940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.805030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 
00:25:29.846 [2024-07-12 12:02:18.805173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.805311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.805495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.805702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.805872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.805898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.806032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.846 [2024-07-12 12:02:18.806060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.846 qpair failed and we were unable to recover it. 00:25:29.846 [2024-07-12 12:02:18.806183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.806211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.806351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.806399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.806510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.806553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.806696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.806721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.806842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.806873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.806974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.807873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.807914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.808041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.808194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.808329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.808478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.808633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.808817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.808849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.809806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.809957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.809988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.810956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.810982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.811112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.811292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.811446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.811605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.811724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.811873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.811901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.812785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.812932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.812976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.813949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.813976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.814577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.814891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.814979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.815783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 
00:25:29.847 [2024-07-12 12:02:18.815954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.815998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.816958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.847 [2024-07-12 12:02:18.816984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.847 qpair failed and we were unable to recover it. 00:25:29.847 [2024-07-12 12:02:18.817083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.817275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.817457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.817607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.817798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.817911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.817939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.818874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.818901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.819011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.819187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.819370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.819555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.819696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.819839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.819870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.820733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.820850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.820881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.821937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.821966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.822235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.822928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.822954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.823526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.823950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.823977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.824871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.824899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.825012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.825937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.825980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.826112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.826230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.826370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.826524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.826670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.826885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.826913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.827859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.827990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 
00:25:29.848 [2024-07-12 12:02:18.828106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.828251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.828419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.828565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.848 qpair failed and we were unable to recover it. 00:25:29.848 [2024-07-12 12:02:18.828721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.848 [2024-07-12 12:02:18.828747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.828870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.828896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.828991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.829127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.829300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.829451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.829569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.829749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.829888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.829930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.830968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.830995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.831095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.831828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.831983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.832153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.832319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.832442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.832587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.832763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.832915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.832942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.833892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.833979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.834096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.834860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.834977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.835530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.835970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.835995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.836848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.836895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.837024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.837161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.837346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.837509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.837654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.837809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.837838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.838093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.838272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.838446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.838581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 
00:25:29.849 [2024-07-12 12:02:18.838747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.838872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.838900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.839002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.839028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.839159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.839202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.839321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.849 [2024-07-12 12:02:18.839365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.849 qpair failed and we were unable to recover it. 00:25:29.849 [2024-07-12 12:02:18.839470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.839499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.839656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.839682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.839799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.839824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.839964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.839992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.840080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 
00:25:29.850 [2024-07-12 12:02:18.840254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.840410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.840613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.840728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.840876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.840919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.841033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.841165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.841366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.841533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.841735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 
00:25:29.850 [2024-07-12 12:02:18.841902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.841956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.842850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.842884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 
00:25:29.850 [2024-07-12 12:02:18.843471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.843884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.843926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 
00:25:29.850 [2024-07-12 12:02:18.844792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.844904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.844932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.845883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.845922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.846061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.846108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 00:25:29.850 [2024-07-12 12:02:18.846273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.850 [2024-07-12 12:02:18.846316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.850 qpair failed and we were unable to recover it. 
00:25:29.850 [2024-07-12 12:02:18.846479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.850 [2024-07-12 12:02:18.846524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420
00:25:29.850 qpair failed and we were unable to recover it.
00:25:29.850 [2024-07-12 12:02:18.846975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.850 [2024-07-12 12:02:18.847005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:29.850 qpair failed and we were unable to recover it.
00:25:29.850 [2024-07-12 12:02:18.851504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.850 [2024-07-12 12:02:18.851547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:29.850 qpair failed and we were unable to recover it.
00:25:29.851 [2024-07-12 12:02:18.852801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.851 [2024-07-12 12:02:18.852855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:29.851 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:02:18.846 through 12:02:18.878 for tqpairs 0x7fba74000b90, 0x7fba6c000b90, 0x7fba7c000b90 and 0x1349f90 ...]
00:25:29.853 [2024-07-12 12:02:18.878534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.878562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.878664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.878692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.878822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.878847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.878930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.878955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.879842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.879876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 
00:25:29.853 [2024-07-12 12:02:18.879998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.880899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.880983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.881132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.881304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 
00:25:29.853 [2024-07-12 12:02:18.881456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.881616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.881796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.881826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.881979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.882845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.882890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 
00:25:29.853 [2024-07-12 12:02:18.883016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.883187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.883375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.883599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.883800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.883961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.883987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.884111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.884157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.884292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.884319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.884464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.884512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.884718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.884745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 
00:25:29.853 [2024-07-12 12:02:18.884836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.884864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.884986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.885790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.885965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.886004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.886127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.886155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.886271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.886297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 
00:25:29.853 [2024-07-12 12:02:18.886408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.853 [2024-07-12 12:02:18.886433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.853 qpair failed and we were unable to recover it. 00:25:29.853 [2024-07-12 12:02:18.886526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.886555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.886677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.886705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.886874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.886918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.887752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.887915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.887954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.888950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.888994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.889099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.889253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.889407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.889593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.889771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.889945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.889973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.890112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.890141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.890277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.890305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.890498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.890549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.890678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.890706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.890838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.890888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.891346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.891961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.891988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.892807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.892940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.892965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.893874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.893980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.894299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.894849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.894977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.895654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.895957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.895985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.896924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.896950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 
00:25:29.854 [2024-07-12 12:02:18.897065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.897090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.897170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.854 [2024-07-12 12:02:18.897195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.854 qpair failed and we were unable to recover it. 00:25:29.854 [2024-07-12 12:02:18.897325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.897352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.897543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.897571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.897721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.897748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.897861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.897930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.898657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.898942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.898968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.899885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.899924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.900033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.900242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.900414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.900579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.900698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.900841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.900877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.901020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.901139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.901355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.901494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.901659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.901811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.901850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.902954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.902982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.903143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.903320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.903481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.903626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.903771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.903907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.903933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.904880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.904906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.905044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.905249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.905409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.905568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.905737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.905886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.905913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.906662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.906833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.906984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.907899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.907924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.908124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.908908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.908947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.909042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.909069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.909182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.909226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.909317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.909344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 00:25:29.855 [2024-07-12 12:02:18.909464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.909490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.855 qpair failed and we were unable to recover it. 
00:25:29.855 [2024-07-12 12:02:18.909592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.855 [2024-07-12 12:02:18.909618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.909708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.909734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.909844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.909875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.909973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.910927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.910954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.911036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.911875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.911992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.912155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.912323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.912535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.912710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.912882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.912909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.913849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.913884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.914370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.914848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.914976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.915863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.915893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.915983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.916875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.916903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.917275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.917897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.917923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.918646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.918884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.918910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.919970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.919999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 
00:25:29.856 [2024-07-12 12:02:18.920124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.920152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.920272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.920300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.920431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.856 [2024-07-12 12:02:18.920459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.856 qpair failed and we were unable to recover it. 00:25:29.856 [2024-07-12 12:02:18.920648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.920696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.920816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.920842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.920986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.921125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.921298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.921449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.921611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.921756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.921885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.921928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.922835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.922988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.923132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.923304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.923489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.923610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.923770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.923951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.923979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.924110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.924139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.924293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.924337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.924510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.924554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.924646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.924672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.924813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.924843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.924974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.925843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.925880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.926535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.926880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.926993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.927768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.927940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.927984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.928095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.928124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.928242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.928292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.928477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.928527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.928694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.928755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.928917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.928947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.929074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.929236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.929419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.929611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.929775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.929883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.929909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.930864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.930895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.931310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.931889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.931929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 
00:25:29.857 [2024-07-12 12:02:18.932680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.857 [2024-07-12 12:02:18.932706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.857 qpair failed and we were unable to recover it. 00:25:29.857 [2024-07-12 12:02:18.932827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.932853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.932984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.933167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.933326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.933482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.933666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.933821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.933850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.934267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.934833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.934860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.935744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.935914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.935942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.936890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.936916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.937054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.937268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.937428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.937573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.937716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.937883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.937909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.938818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.938968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.938993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.939927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.939953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.940313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.940944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.940983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.941651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.941915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.941942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.942770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 
00:25:29.858 [2024-07-12 12:02:18.942892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.942919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.943006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.943032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.943150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.943176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.943264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.858 [2024-07-12 12:02:18.943290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.858 qpair failed and we were unable to recover it. 00:25:29.858 [2024-07-12 12:02:18.943397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.943423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.943505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.943531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.943614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.943641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.943758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.943784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.943874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.943900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.943995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 
00:25:29.859 [2024-07-12 12:02:18.944132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.944914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.944942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 
00:25:29.859 [2024-07-12 12:02:18.945407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.945940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.945971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 
00:25:29.859 [2024-07-12 12:02:18.946733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.946889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.946985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.947891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.947918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.948017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.948042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 
00:25:29.859 [2024-07-12 12:02:18.948134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.859 [2024-07-12 12:02:18.948160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.859 qpair failed and we were unable to recover it. 00:25:29.859 [2024-07-12 12:02:18.948256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.948899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.948988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 
00:25:29.860 [2024-07-12 12:02:18.949387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.949889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.949917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.950009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.950036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.950131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.950157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.950265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.950291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.950382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.950409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 00:25:29.860 [2024-07-12 12:02:18.950499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.860 [2024-07-12 12:02:18.950526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.860 qpair failed and we were unable to recover it. 
00:25:29.865 [2024-07-12 12:02:18.977724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.977752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.977878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.977904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.865 [2024-07-12 12:02:18.978858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.865 [2024-07-12 12:02:18.978895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.865 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 
00:25:29.866 [2024-07-12 12:02:18.979142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.979891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.979996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 
00:25:29.866 [2024-07-12 12:02:18.980568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.980896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.980984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.981910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.981937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 
00:25:29.866 [2024-07-12 12:02:18.982022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.982878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.982922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 
00:25:29.866 [2024-07-12 12:02:18.983427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.983969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.983995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.984113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.984139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.984253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.984279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.984393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.984419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.866 [2024-07-12 12:02:18.984510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.866 [2024-07-12 12:02:18.984536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.866 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.984624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.984650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 
00:25:29.867 [2024-07-12 12:02:18.984738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.984765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.984906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.984932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.985853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.985884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 
00:25:29.867 [2024-07-12 12:02:18.986004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.986932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.986958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.987045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.987071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 00:25:29.867 [2024-07-12 12:02:18.987182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.987207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.867 qpair failed and we were unable to recover it. 
00:25:29.867 [2024-07-12 12:02:18.987298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.867 [2024-07-12 12:02:18.987324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.987409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.987436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.987553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.987579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.987672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.987698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.987793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.987820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.987920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.987947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 
00:25:29.868 [2024-07-12 12:02:18.988628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.988896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.988922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.989792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 
00:25:29.868 [2024-07-12 12:02:18.989909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.989936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.990917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.990944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 
00:25:29.868 [2024-07-12 12:02:18.991215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.991890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.991916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.992034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.992060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.992171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.992197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.992313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.992339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 00:25:29.868 [2024-07-12 12:02:18.992427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.868 [2024-07-12 12:02:18.992452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.868 qpair failed and we were unable to recover it. 
00:25:29.868 [2024-07-12 12:02:18.992545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.992572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.992722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.992748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.992832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.992858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.992981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.993734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 
00:25:29.869 [2024-07-12 12:02:18.993902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.993928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.994914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.994943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 
00:25:29.869 [2024-07-12 12:02:18.995152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.995951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.995977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 
00:25:29.869 [2024-07-12 12:02:18.996437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.996972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.996998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.997079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.997106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.997193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.997220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.997312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.869 [2024-07-12 12:02:18.997339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.869 qpair failed and we were unable to recover it. 00:25:29.869 [2024-07-12 12:02:18.997458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.997483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.997574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.997600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 
00:25:29.870 [2024-07-12 12:02:18.997714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.997740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.997829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.997855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.997972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.997998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.998856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.998890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 
00:25:29.870 [2024-07-12 12:02:18.998983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:18.999947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:18.999973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 
00:25:29.870 [2024-07-12 12:02:19.000364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.000903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.000987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 
00:25:29.870 [2024-07-12 12:02:19.001567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.001943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.001970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.002065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.002091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.002213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.002238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.002336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.002365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.870 [2024-07-12 12:02:19.002466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.870 [2024-07-12 12:02:19.002492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.870 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.002583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.002609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.002727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.002753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 
00:25:29.871 [2024-07-12 12:02:19.002845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.002877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.002973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.003860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.003992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 
00:25:29.871 [2024-07-12 12:02:19.004113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.004860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.004984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 
00:25:29.871 [2024-07-12 12:02:19.005414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.005938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.005964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 
00:25:29.871 [2024-07-12 12:02:19.006749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.006937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.006964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.007054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.007081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.007171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.007197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.871 qpair failed and we were unable to recover it. 00:25:29.871 [2024-07-12 12:02:19.007289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.871 [2024-07-12 12:02:19.007315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.007404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.007429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.007542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.007569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.007662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.007690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.007772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.007798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.007885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.007912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 
00:25:29.872 [2024-07-12 12:02:19.008001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.008953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.008979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 
00:25:29.872 [2024-07-12 12:02:19.009340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.009885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.009911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 
00:25:29.872 [2024-07-12 12:02:19.010612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.010818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.010992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.011910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.011936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 
00:25:29.872 [2024-07-12 12:02:19.012026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.012059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.012151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.872 [2024-07-12 12:02:19.012176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.872 qpair failed and we were unable to recover it. 00:25:29.872 [2024-07-12 12:02:19.012260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.012402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.012528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.012667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.012780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.012922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.012949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 
00:25:29.873 [2024-07-12 12:02:19.013298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.013939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.013966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 
00:25:29.873 [2024-07-12 12:02:19.014591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.014940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.014967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 
00:25:29.873 [2024-07-12 12:02:19.015860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.015894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.015983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 00:25:29.873 [2024-07-12 12:02:19.016907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.873 [2024-07-12 12:02:19.016933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.873 qpair failed and we were unable to recover it. 
00:25:29.873 [2024-07-12 12:02:19.017047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.017933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.017960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.018048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.018076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 00:25:29.874 [2024-07-12 12:02:19.018163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.874 [2024-07-12 12:02:19.018189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.874 qpair failed and we were unable to recover it. 
00:25:29.879 [2024-07-12 12:02:19.044691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.044720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.044836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.044870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.045959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.045986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 
00:25:29.879 [2024-07-12 12:02:19.046079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.046876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.046920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 
00:25:29.879 [2024-07-12 12:02:19.047503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.047910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.047937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 
00:25:29.879 [2024-07-12 12:02:19.048876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.048907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.048994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.049889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.049995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.050021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.050110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.050136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 
00:25:29.879 [2024-07-12 12:02:19.050225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.050251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.050357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.879 [2024-07-12 12:02:19.050385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.879 qpair failed and we were unable to recover it. 00:25:29.879 [2024-07-12 12:02:19.050474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.050503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.050619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.050663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.050796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.050825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.050923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.050951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.051075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.051206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.051417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.051580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 
00:25:29.880 [2024-07-12 12:02:19.051754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.051894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.051921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.052900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.052988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 
00:25:29.880 [2024-07-12 12:02:19.053260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.053891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.053988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 
00:25:29.880 [2024-07-12 12:02:19.054651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.054925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.054952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.055824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.055879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 
00:25:29.880 [2024-07-12 12:02:19.056022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.056049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.056139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.880 [2024-07-12 12:02:19.056165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.880 qpair failed and we were unable to recover it. 00:25:29.880 [2024-07-12 12:02:19.056254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.056280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.056387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.056413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.056524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.056553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.056707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.056735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.056834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.056863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.056987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 
00:25:29.881 [2024-07-12 12:02:19.057394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.057941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.057981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 
00:25:29.881 [2024-07-12 12:02:19.058787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.058944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.058983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.059965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.059990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 
00:25:29.881 [2024-07-12 12:02:19.060079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.060882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.060989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 
00:25:29.881 [2024-07-12 12:02:19.061381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.061958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.061984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.062070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.062095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.062196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.062224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.062328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.062355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.881 qpair failed and we were unable to recover it. 00:25:29.881 [2024-07-12 12:02:19.062458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.881 [2024-07-12 12:02:19.062489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.062587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.062616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 
00:25:29.882 [2024-07-12 12:02:19.062770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.062799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.062939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.062966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.063959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.063985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 
00:25:29.882 [2024-07-12 12:02:19.064071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.064944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.064971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 
00:25:29.882 [2024-07-12 12:02:19.065448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.065974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.065998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 
00:25:29.882 [2024-07-12 12:02:19.066787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.066941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.066968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.067965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.067990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 
00:25:29.882 [2024-07-12 12:02:19.068098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.068123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.068210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.882 [2024-07-12 12:02:19.068235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.882 qpair failed and we were unable to recover it. 00:25:29.882 [2024-07-12 12:02:19.068319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.068430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.068571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.068680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.068783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.068959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.068984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 
00:25:29.883 [2024-07-12 12:02:19.069334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.069906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.069989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 
00:25:29.883 [2024-07-12 12:02:19.070637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.070890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.070976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.071768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 
00:25:29.883 [2024-07-12 12:02:19.071952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.071980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.072856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.072897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 
00:25:29.883 [2024-07-12 12:02:19.073261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.073947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.073974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.074094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.074119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.074204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.074229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.074322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.074352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 00:25:29.883 [2024-07-12 12:02:19.074467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.883 [2024-07-12 12:02:19.074492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.883 qpair failed and we were unable to recover it. 
00:25:29.883 [2024-07-12 12:02:19.074633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.074675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.074767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.074795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.074911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.074937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.075879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.075906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 
00:25:29.884 [2024-07-12 12:02:19.076023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.076837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.076982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.077122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.077243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 
00:25:29.884 [2024-07-12 12:02:19.077409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.077518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.077669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.077878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.077924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 
00:25:29.884 [2024-07-12 12:02:19.078791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.078935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.078962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.079970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.079995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 
00:25:29.884 [2024-07-12 12:02:19.080080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.080105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.080191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.884 [2024-07-12 12:02:19.080216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.884 qpair failed and we were unable to recover it. 00:25:29.884 [2024-07-12 12:02:19.080308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.080450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.080562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.080677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.080788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.080914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.080941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 
00:25:29.885 [2024-07-12 12:02:19.081302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.081965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.081992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 
00:25:29.885 [2024-07-12 12:02:19.082554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.082921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.082947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 
00:25:29.885 [2024-07-12 12:02:19.083836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.083959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.083986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.084920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.084960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 
00:25:29.885 [2024-07-12 12:02:19.085287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.085909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.085935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 
00:25:29.885 [2024-07-12 12:02:19.086560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.885 qpair failed and we were unable to recover it. 00:25:29.885 [2024-07-12 12:02:19.086845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.885 [2024-07-12 12:02:19.086898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.086997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.087752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 
00:25:29.886 [2024-07-12 12:02:19.087896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.087923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.088901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.088985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 
00:25:29.886 [2024-07-12 12:02:19.089234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.089927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.089953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 
00:25:29.886 [2024-07-12 12:02:19.090631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.090954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.090981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.091923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.091950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 
00:25:29.886 [2024-07-12 12:02:19.092039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.092859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.092891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.093003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.093029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.093129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.093159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 00:25:29.886 [2024-07-12 12:02:19.093288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.886 [2024-07-12 12:02:19.093317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.886 qpair failed and we were unable to recover it. 
00:25:29.887 [2024-07-12 12:02:19.093421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.093450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.093572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.093601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.093729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.093760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.093876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.093932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.094825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.094864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 
00:25:29.887 [2024-07-12 12:02:19.094977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.095877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.095921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 
00:25:29.887 [2024-07-12 12:02:19.096402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.096933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.096959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 
00:25:29.887 [2024-07-12 12:02:19.097727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.097894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.097983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.098968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.098993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 
00:25:29.887 [2024-07-12 12:02:19.099078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.099104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.099204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.099231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.099322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.099348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.099455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.887 [2024-07-12 12:02:19.099484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.887 qpair failed and we were unable to recover it. 00:25:29.887 [2024-07-12 12:02:19.099595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.099623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.099730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.099758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.099900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.099931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 
00:25:29.888 [2024-07-12 12:02:19.100406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.100905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.100949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 
00:25:29.888 [2024-07-12 12:02:19.101726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.101877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.101905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.102891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.102918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 
00:25:29.888 [2024-07-12 12:02:19.103031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.103877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.103921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 
00:25:29.888 [2024-07-12 12:02:19.104427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.104857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.104891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 
00:25:29.888 [2024-07-12 12:02:19.105767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.105921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.105948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.888 qpair failed and we were unable to recover it. 00:25:29.888 [2024-07-12 12:02:19.106794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.888 [2024-07-12 12:02:19.106837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.106957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.107134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.107292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.107463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.107618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.107750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.107860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.107895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.108671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.108950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.108976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.109873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.109900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.110011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.110958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.110983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.111342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.111887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.111929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.112648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.112874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.112999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.113893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.113949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.114047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.114076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 
00:25:29.889 [2024-07-12 12:02:19.114186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.114212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.114299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.114325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.114427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.114456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.114595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.889 [2024-07-12 12:02:19.114639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.889 qpair failed and we were unable to recover it. 00:25:29.889 [2024-07-12 12:02:19.114778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.114809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.114928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.114955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 
00:25:29.890 [2024-07-12 12:02:19.115632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.115908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.115933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.116880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.116925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 
00:25:29.890 [2024-07-12 12:02:19.117017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.117882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.117910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 
00:25:29.890 [2024-07-12 12:02:19.118415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.118892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.118934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 
00:25:29.890 [2024-07-12 12:02:19.119838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.119864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.119992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.120900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.120927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 
00:25:29.890 [2024-07-12 12:02:19.121237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.121891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.121936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.890 [2024-07-12 12:02:19.122030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.890 [2024-07-12 12:02:19.122056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.890 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.122172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.122199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.122309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.122338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.122496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.122525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.122670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.122698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.122818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.122847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.122974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.123834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.123974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.124146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.124348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.124551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.124703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.124853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.124900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.125879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.125906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.126121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.126907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.126946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.127450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.127936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.127963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.128858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.128891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.129033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.129837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.129969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.130148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.130306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 
00:25:29.891 [2024-07-12 12:02:19.130478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.130681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.130824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.891 qpair failed and we were unable to recover it. 00:25:29.891 [2024-07-12 12:02:19.130958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.891 [2024-07-12 12:02:19.130984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.131944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.131970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 
00:25:29.892 [2024-07-12 12:02:19.132056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.132844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.132990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.133132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.133307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.133442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 
00:25:29.892 [2024-07-12 12:02:19.133559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.133722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.133861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.133891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 00:25:29.892 [2024-07-12 12:02:19.134847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.892 [2024-07-12 12:02:19.134878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.892 qpair failed and we were unable to recover it. 
00:25:29.892 [2024-07-12 12:02:19.136721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.892 [2024-07-12 12:02:19.136753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:29.892 qpair failed and we were unable to recover it.
00:25:29.892 [2024-07-12 12:02:19.138887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.892 [2024-07-12 12:02:19.138932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420
00:25:29.892 qpair failed and we were unable to recover it.
00:25:29.893 [2024-07-12 12:02:19.142899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.893 [2024-07-12 12:02:19.142938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:29.893 qpair failed and we were unable to recover it.
[The same three-message failure sequence (connect() failed, errno = 111; sock connection error of tqpair=...; qpair failed and we were unable to recover it.) repeats continuously from 2024-07-12 12:02:19.134992 through 12:02:19.164445, cycling among tqpairs 0x7fba6c000b90, 0x7fba7c000b90, 0x7fba74000b90 and 0x1349f90, all with addr=10.0.0.2, port=4420.]
00:25:29.895 [2024-07-12 12:02:19.164640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.164668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.164790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.164818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.164933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.164959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.165840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.165988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 
00:25:29.895 [2024-07-12 12:02:19.166099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.166273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.166414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.166566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.166686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.166873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.166913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.167040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-07-12 12:02:19.167067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.895 qpair failed and we were unable to recover it. 00:25:29.895 [2024-07-12 12:02:19.167168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.167297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.167461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.167619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.167780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.167916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.167948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.168908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.168934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.169183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.169840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.169998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.170173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.170334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.170460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.170631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.170787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.170952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.170982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.171941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.171972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.172073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.172947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.172975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.173094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.173235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.173448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.173658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.173805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.173973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.173999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.174958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.174997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.175090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.175117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 
00:25:29.896 [2024-07-12 12:02:19.175273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.175317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.175460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.175488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.175655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.175716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.896 qpair failed and we were unable to recover it. 00:25:29.896 [2024-07-12 12:02:19.175830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.896 [2024-07-12 12:02:19.175856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.175946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.175971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.897 [2024-07-12 12:02:19.176817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.176970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.176999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.177966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.177992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.178144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.178312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.897 [2024-07-12 12:02:19.178445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.178612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.178747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.178912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.178937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.179779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.897 [2024-07-12 12:02:19.179912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.179937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.180906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.180945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.897 [2024-07-12 12:02:19.181332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.181949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.181975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.182087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.182218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.182407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.182576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.182709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.897 [2024-07-12 12:02:19.182845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.182876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.183837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.183882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.184007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.184034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.184145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.184174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 00:25:29.897 [2024-07-12 12:02:19.184329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.897 [2024-07-12 12:02:19.184378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.897 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.184484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.184528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.184664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.184693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.184877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.184910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.185913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.185955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.186219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.186862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.186987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.187220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.187412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.187581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.187720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.187880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.187924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.188931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.188969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.189132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.189300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.189455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.189635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.189777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.189953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.189979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.190839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.190892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.190983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.191871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.191929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.192068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.192224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.192345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.192501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.192637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.192831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.192884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.193955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.193981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.194067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.194932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.194958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 
00:25:29.898 [2024-07-12 12:02:19.195404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.195959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.195985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.196066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.196091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.196209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.196235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.196353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.898 [2024-07-12 12:02:19.196381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.898 qpair failed and we were unable to recover it. 00:25:29.898 [2024-07-12 12:02:19.196475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.196504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.196626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.196668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.196784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.196809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.196930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.196957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.197845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.197918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.198165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.198947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.198973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.199466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.199883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.199912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.200779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.200946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.200975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.201874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.201900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.202285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.202895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.202921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.203491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.203920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.203947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 
00:25:29.899 [2024-07-12 12:02:19.204843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.204882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.204996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.899 [2024-07-12 12:02:19.205802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.899 qpair failed and we were unable to recover it. 00:25:29.899 [2024-07-12 12:02:19.205909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.205938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.206142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.206889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.206915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.207383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.207842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.207891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.208734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.208888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.208933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.209929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.209956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.210268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.210964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.210990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.211670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.211967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.211995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.212968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.212995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.213110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.213896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.213981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.214529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.214942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.214984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.215851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.215886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 
00:25:29.900 [2024-07-12 12:02:19.216048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.216074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.216182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.216210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.216311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.900 [2024-07-12 12:02:19.216341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.900 qpair failed and we were unable to recover it. 00:25:29.900 [2024-07-12 12:02:19.216431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.216460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.216546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.216576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.216675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.216704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.216842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.216875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.216967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.216993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.217394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.217959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.217985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.218153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.218317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.218448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.218605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.218729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.218857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.218907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.219876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.219904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.220452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.220962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.220989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.221847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.221964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.221988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.222957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.222983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.223410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.223842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.223877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.224744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.224881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.224908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.225895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.225922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.226006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.226148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 
00:25:29.901 [2024-07-12 12:02:19.226352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.226539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.226667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.226796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.226826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.901 [2024-07-12 12:02:19.227010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.901 [2024-07-12 12:02:19.227036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.901 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.227152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.227182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.227328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.227375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.227522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.227569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.227691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.227719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.227848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.227883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.227981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.228951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.228981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.229357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.229968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.229995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.230773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.230891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.230924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.231968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.231992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.232079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.232218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.232340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.232497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.232640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.232858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.232912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.233611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.233936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.233982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.234112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.234138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.234279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.234329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.234510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.234559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.234705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.234731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.234827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.234853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.235341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.235892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.235935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.236678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.236889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.236917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.237848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.237896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.238007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.238032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 
00:25:29.902 [2024-07-12 12:02:19.238166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.238193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.238287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.238315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.902 qpair failed and we were unable to recover it. 00:25:29.902 [2024-07-12 12:02:19.238403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.902 [2024-07-12 12:02:19.238431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.238556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.238584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.238675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.238702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.238806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.238838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.238982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.239501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.239911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.239992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.240779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.240900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.240944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.241877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.241992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.242097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.242903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.242931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.243441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.243914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.243941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.244924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.244949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.245052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.245886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.245999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.246202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.246345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.246512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.246663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.246778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.246918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.246944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.247877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.247920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.248038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.248063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 
00:25:29.903 [2024-07-12 12:02:19.248147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.248172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.248282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.248309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.248400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.248428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.248555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.903 [2024-07-12 12:02:19.248583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.903 qpair failed and we were unable to recover it. 00:25:29.903 [2024-07-12 12:02:19.248701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.248729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.248881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.248920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.249644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.249904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.249929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.250944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.250970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.251060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.251966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.251992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.252104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.252276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.252408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.252575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.252750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.252888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.252914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.253910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.253936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.254022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.254879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.254918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.255059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.255242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.255393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.255589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.255745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.255880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.255923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.256870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.256914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.257205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.257906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.257931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 
00:25:29.904 [2024-07-12 12:02:19.258524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.258851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.258889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.904 qpair failed and we were unable to recover it. 00:25:29.904 [2024-07-12 12:02:19.259939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.904 [2024-07-12 12:02:19.259965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.260164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.260921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.260961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.261645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.261972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.261996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.262881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.262920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.263072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.263100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.263289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.263339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.263471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.263514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.263648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.263676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.263832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.263857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.263991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.264778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.264942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.264969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.265873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.265902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.266350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.266856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.266998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.267037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.267214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.267266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.267439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.267488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.267649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.267697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.267818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.267844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.267968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.268103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.268303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.268472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.268710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.268875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.268900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.268989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.269149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.269294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.269454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.269612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.269765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.269905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.269931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.270832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.270987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.271100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 
00:25:29.905 [2024-07-12 12:02:19.271236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.271431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.271647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.271800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.271944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.271968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.272068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.272095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.272195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.905 [2024-07-12 12:02:19.272221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.905 qpair failed and we were unable to recover it. 00:25:29.905 [2024-07-12 12:02:19.272348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.272375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.272469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.272495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.272590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.272618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.272760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.272799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.272926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.272954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.273890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.273915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.274286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.274963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.274994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.275074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.275264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.275420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.275551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.275687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.275850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.275883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.276840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.276990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.277128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.277297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.277440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.277646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.277827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.277853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.277987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.278924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.278949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.279069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.279938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.279964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.280493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.280881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.280908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.281801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.281828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.906 [2024-07-12 12:02:19.281971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.282972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.282997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.283086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.283110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.283247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.283274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 00:25:29.906 [2024-07-12 12:02:19.283422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.906 [2024-07-12 12:02:19.283449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.906 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.283541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.283568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.283686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.283714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.283900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.283955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.284939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.284965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.285084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.285110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.285268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.285297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.285449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.285477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.285630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.285674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.285800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.285840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.285975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.286856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.286964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.286989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.287897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.287926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.288016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.288191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.288326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.288543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.288713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.288877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.288920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.289771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.289948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.289987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.290856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.290905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.291490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.291942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.291993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.292904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.292950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.293057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.293087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.293263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.293301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.293462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.293510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.293668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.293718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.293810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.293837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.293979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.294111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.294310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.294512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.294669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 
00:25:29.907 [2024-07-12 12:02:19.294885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.294924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.907 qpair failed and we were unable to recover it. 00:25:29.907 [2024-07-12 12:02:19.295886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.907 [2024-07-12 12:02:19.295911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 
00:25:29.908 [2024-07-12 12:02:19.296244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.296902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.296942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 
00:25:29.908 [2024-07-12 12:02:19.297829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.297956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.297982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.298927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.298955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.299071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 
00:25:29.908 [2024-07-12 12:02:19.299242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.299389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.299597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.299779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.299950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.299977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.300115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.300268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.300394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.300543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.300666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 
00:25:29.908 [2024-07-12 12:02:19.300822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.300871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.301870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.301896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 
00:25:29.908 [2024-07-12 12:02:19.302489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.302971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.302998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.303118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.303144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.303273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.303302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.303423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.303479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.303597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.908 [2024-07-12 12:02:19.303626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:29.908 qpair failed and we were unable to recover it. 00:25:29.908 [2024-07-12 12:02:19.303733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.303761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.303881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.303909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 
00:25:30.186 [2024-07-12 12:02:19.304018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.304177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.304327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.304508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.304660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.304828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.304881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.305034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.305072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.305191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.305220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.186 [2024-07-12 12:02:19.305350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.186 [2024-07-12 12:02:19.305378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.186 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.305490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.305538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 
00:25:30.187 [2024-07-12 12:02:19.305692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.305740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.305857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.305889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.305976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.306847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.306996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 
00:25:30.187 [2024-07-12 12:02:19.307132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.307236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.307388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.307535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.307911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.307939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.308065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.308217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.308389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.308578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.308761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 
00:25:30.187 [2024-07-12 12:02:19.308908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.308935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.309140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.309358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.309559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.309703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.309825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.309959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 
00:25:30.187 [2024-07-12 12:02:19.310573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.310973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.310999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.187 qpair failed and we were unable to recover it. 00:25:30.187 [2024-07-12 12:02:19.311739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.187 [2024-07-12 12:02:19.311777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 
00:25:30.188 [2024-07-12 12:02:19.311899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.311927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.312879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.312905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 
00:25:30.188 [2024-07-12 12:02:19.313252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.313889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.313916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 
00:25:30.188 [2024-07-12 12:02:19.314590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.314882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.314908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.315915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.315942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 
00:25:30.188 [2024-07-12 12:02:19.316073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.316891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.316921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.317042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.317067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.317143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.317168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 00:25:30.188 [2024-07-12 12:02:19.317281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.188 [2024-07-12 12:02:19.317306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.188 qpair failed and we were unable to recover it. 
00:25:30.189 [2024-07-12 12:02:19.317418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.317444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.317526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.317551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.317670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.317695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.317807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.317832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.317930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.317956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 
00:25:30.189 [2024-07-12 12:02:19.318702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.318846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.318885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.319878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.319917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 
00:25:30.189 [2024-07-12 12:02:19.320137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.320913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.320997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.321181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.321329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 
00:25:30.189 [2024-07-12 12:02:19.321461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.321626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.321756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.321958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.321988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.322119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.322146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.322327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.322375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.322569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.322627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.322781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.322810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.189 [2024-07-12 12:02:19.322940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.189 [2024-07-12 12:02:19.322968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.189 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.323081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 
00:25:30.190 [2024-07-12 12:02:19.323223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.323376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.323528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.323702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.323877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.323904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 
00:25:30.190 [2024-07-12 12:02:19.324795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.324901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.324927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.325957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.325982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.326066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 
00:25:30.190 [2024-07-12 12:02:19.326240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.326397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.326553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.326718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.326922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.326961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.327049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.327186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.327383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.327613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.327763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 
00:25:30.190 [2024-07-12 12:02:19.327955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.327994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.328909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.328936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.190 qpair failed and we were unable to recover it. 00:25:30.190 [2024-07-12 12:02:19.329039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.190 [2024-07-12 12:02:19.329082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.329176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.329332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 
00:25:30.191 [2024-07-12 12:02:19.329482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.329615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.329775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.329941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.329969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.330838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.330871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 
00:25:30.191 [2024-07-12 12:02:19.331020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.331928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.331955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.332046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.332223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.332360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 
00:25:30.191 [2024-07-12 12:02:19.332588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.332776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.332941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.332967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.333950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.333976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.334073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.334099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 
00:25:30.191 [2024-07-12 12:02:19.334205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.334231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.334337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.191 [2024-07-12 12:02:19.334366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.191 qpair failed and we were unable to recover it. 00:25:30.191 [2024-07-12 12:02:19.334482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.334511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.334607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.334650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.334847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.334886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.334979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.335096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.335302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.335452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.335604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 
00:25:30.192 [2024-07-12 12:02:19.335733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.335889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.335933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.336855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.336906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 
00:25:30.192 [2024-07-12 12:02:19.337309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.337831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.337974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 
00:25:30.192 [2024-07-12 12:02:19.338797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.338973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.338999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.339896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.339923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.340005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.340030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.340121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.340148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 
00:25:30.192 [2024-07-12 12:02:19.340247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.340276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.340394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.192 [2024-07-12 12:02:19.340423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.192 qpair failed and we were unable to recover it. 00:25:30.192 [2024-07-12 12:02:19.340554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.340598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.340749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.340777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.340862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.340910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 
00:25:30.193 [2024-07-12 12:02:19.341836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.341964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.341990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.342915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.342941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 
00:25:30.193 [2024-07-12 12:02:19.343315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.343915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.343942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.344085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.344221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.344469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.344627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.344747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 
00:25:30.193 [2024-07-12 12:02:19.344909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.344935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.345889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.345915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.346008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.346034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.346185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.346211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.346313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.346342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 
00:25:30.193 [2024-07-12 12:02:19.346429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.346458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.193 [2024-07-12 12:02:19.346558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.193 [2024-07-12 12:02:19.346588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.193 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.346680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.346709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.346833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.346861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.347827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.347856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 
00:25:30.194 [2024-07-12 12:02:19.348004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.348917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.348943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.349087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.349230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.349399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 
00:25:30.194 [2024-07-12 12:02:19.349577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.349735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.349881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.349925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.350841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 
00:25:30.194 [2024-07-12 12:02:19.350960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.350985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.351122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.351267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.351463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.351619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.351850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.351988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.352013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.352120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.352161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.352295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.352320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.352431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.352457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 
00:25:30.194 [2024-07-12 12:02:19.352619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.194 [2024-07-12 12:02:19.352647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.194 qpair failed and we were unable to recover it. 00:25:30.194 [2024-07-12 12:02:19.352781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.352806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.352895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.352922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.353893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.353985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 
00:25:30.195 [2024-07-12 12:02:19.354128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.354971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.354996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.355091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.355236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.355394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 
00:25:30.195 [2024-07-12 12:02:19.355583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.355714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.355876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.355901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.356909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.356936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 
00:25:30.195 [2024-07-12 12:02:19.357047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.357172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.357338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.357446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.357586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.195 [2024-07-12 12:02:19.357696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.195 [2024-07-12 12:02:19.357721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.195 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.357839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.357870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.357976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 
00:25:30.196 [2024-07-12 12:02:19.358354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.358890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.358915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 
00:25:30.196 [2024-07-12 12:02:19.359733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.359862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.359910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.360824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.360853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.361039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.361219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 
00:25:30.196 [2024-07-12 12:02:19.361373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.361528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.361719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.361907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.361948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.362804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 
00:25:30.196 [2024-07-12 12:02:19.362963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.362989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.363099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.363124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.363236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.363261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.363371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.363399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.363523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.363551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.196 qpair failed and we were unable to recover it. 00:25:30.196 [2024-07-12 12:02:19.363642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.196 [2024-07-12 12:02:19.363671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.363799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.363824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.363943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.363969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.364061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.364173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 
00:25:30.197 [2024-07-12 12:02:19.364317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.364477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.364686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.364840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.364873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.365674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 
00:25:30.197 [2024-07-12 12:02:19.365838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.365882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.366954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.366980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.367060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.367219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 
00:25:30.197 [2024-07-12 12:02:19.367381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.367530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.367729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.367877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.367903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.368806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 
00:25:30.197 [2024-07-12 12:02:19.368932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.368958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.369072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.369184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.369289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.369415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.197 [2024-07-12 12:02:19.369553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.197 qpair failed and we were unable to recover it. 00:25:30.197 [2024-07-12 12:02:19.369674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.369699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.369831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.369858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.369973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.369998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.370113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 
00:25:30.198 [2024-07-12 12:02:19.370291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.370460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.370603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.370730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.370877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.370903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 
00:25:30.198 [2024-07-12 12:02:19.371795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.371955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.371981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.372895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.372921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 
00:25:30.198 [2024-07-12 12:02:19.373392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.373896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.373990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.374108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.374241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.374415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.374577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.374745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 
00:25:30.198 [2024-07-12 12:02:19.374916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.374944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.198 [2024-07-12 12:02:19.375791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.198 [2024-07-12 12:02:19.375817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.198 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.375902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.375927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 
00:25:30.199 [2024-07-12 12:02:19.376268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.376947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.376972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 
00:25:30.199 [2024-07-12 12:02:19.377712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.377883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.377992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.378914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.378940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.379032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 
00:25:30.199 [2024-07-12 12:02:19.379198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.379394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.379575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.379725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.379908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.379934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 
00:25:30.199 [2024-07-12 12:02:19.380747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.380887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.199 [2024-07-12 12:02:19.380912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.199 qpair failed and we were unable to recover it. 00:25:30.199 [2024-07-12 12:02:19.381014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.381856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.381890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 
00:25:30.200 [2024-07-12 12:02:19.382049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.382897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.382996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.383179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.383315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 
00:25:30.200 [2024-07-12 12:02:19.383517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.383657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.383807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.383953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.383981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.384809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 
00:25:30.200 [2024-07-12 12:02:19.384973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.384999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.385859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.385983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.386124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.386304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 
00:25:30.200 [2024-07-12 12:02:19.386444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.386637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.386773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.200 qpair failed and we were unable to recover it. 00:25:30.200 [2024-07-12 12:02:19.386902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.200 [2024-07-12 12:02:19.386943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.387767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 
00:25:30.201 [2024-07-12 12:02:19.387916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.387948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.388923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.388952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 
00:25:30.201 [2024-07-12 12:02:19.389504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.389901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.389927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.390730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 
00:25:30.201 [2024-07-12 12:02:19.390878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.390917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.391852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.391999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 
00:25:30.201 [2024-07-12 12:02:19.392447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.392884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.201 [2024-07-12 12:02:19.392989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.201 [2024-07-12 12:02:19.393014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.201 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.393112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.393270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.393468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.393583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.393757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 
00:25:30.202 [2024-07-12 12:02:19.393953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.393979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.394903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.394993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.395181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.395353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 
00:25:30.202 [2024-07-12 12:02:19.395475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.395591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.395739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.395853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.395885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 
00:25:30.202 [2024-07-12 12:02:19.396788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.396961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.396987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.397892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.397991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.398105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 
00:25:30.202 [2024-07-12 12:02:19.398239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.398357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.398512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.398669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.398824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.398863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.202 qpair failed and we were unable to recover it. 00:25:30.202 [2024-07-12 12:02:19.399012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.202 [2024-07-12 12:02:19.399049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.399186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.399328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.399470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.399664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 
00:25:30.203 [2024-07-12 12:02:19.399809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.399954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.399984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.400914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.400941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 
00:25:30.203 [2024-07-12 12:02:19.401230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.401854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.401976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 
00:25:30.203 [2024-07-12 12:02:19.402618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.402889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.402944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.403844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.403879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 
00:25:30.203 [2024-07-12 12:02:19.404154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.203 [2024-07-12 12:02:19.404742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.203 qpair failed and we were unable to recover it. 00:25:30.203 [2024-07-12 12:02:19.404859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.404900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.404990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.405132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.405348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.405489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 
00:25:30.204 [2024-07-12 12:02:19.405664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.405851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.405890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.406832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.406913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 
00:25:30.204 [2024-07-12 12:02:19.407135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.407823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.407981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.408135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.408243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.408420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.408634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 
00:25:30.204 [2024-07-12 12:02:19.408777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.408818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.408976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.409903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.409931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.410021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.410048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 
00:25:30.204 [2024-07-12 12:02:19.410155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.410183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.410293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.410320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.204 qpair failed and we were unable to recover it. 00:25:30.204 [2024-07-12 12:02:19.410437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.204 [2024-07-12 12:02:19.410463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.410595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.410638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.410780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.410806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.410899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.410925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 
00:25:30.205 [2024-07-12 12:02:19.411643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.411901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.411930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.412899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.412926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 
00:25:30.205 [2024-07-12 12:02:19.413038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.413880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.413906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 
00:25:30.205 [2024-07-12 12:02:19.414451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.414965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.414991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.415823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.415851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 
00:25:30.205 [2024-07-12 12:02:19.415985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.416010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.416125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.416150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.416305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-07-12 12:02:19.416332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.205 qpair failed and we were unable to recover it. 00:25:30.205 [2024-07-12 12:02:19.416474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.416499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.416586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.416612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.416721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.416765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.416905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.416931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 
00:25:30.206 [2024-07-12 12:02:19.417485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.417896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.417934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 
00:25:30.206 [2024-07-12 12:02:19.418845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.418885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.418996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.419887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.419935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 
00:25:30.206 [2024-07-12 12:02:19.420162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.420958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.420997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 
00:25:30.206 [2024-07-12 12:02:19.421566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.421971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-12 12:02:19.421997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.206 qpair failed and we were unable to recover it. 00:25:30.206 [2024-07-12 12:02:19.422112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.422255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.422408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.422583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.422716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.422912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.422939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 
00:25:30.207 [2024-07-12 12:02:19.423056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.423202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.423382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.423542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.423685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.423827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.423855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.424016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.424189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.424371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.424567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 
00:25:30.207 [2024-07-12 12:02:19.424731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.424912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.424939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.425895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.425921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 
00:25:30.207 [2024-07-12 12:02:19.426382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.426884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.426982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.427110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.427224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.427370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.427547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.427705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 
00:25:30.207 [2024-07-12 12:02:19.427852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.427898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.428056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.428094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.428228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.428257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.428436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.428487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.428671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.207 [2024-07-12 12:02:19.428719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.207 qpair failed and we were unable to recover it. 00:25:30.207 [2024-07-12 12:02:19.428857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.428891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.428992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.429133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.429299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.429483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 
00:25:30.208 [2024-07-12 12:02:19.429630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.429776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.429947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.429973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.430837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.430883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 
00:25:30.208 [2024-07-12 12:02:19.431137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.431861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.431893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 
00:25:30.208 [2024-07-12 12:02:19.432525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.432895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.432922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.433745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 
00:25:30.208 [2024-07-12 12:02:19.433882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.208 [2024-07-12 12:02:19.433908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.208 qpair failed and we were unable to recover it. 00:25:30.208 [2024-07-12 12:02:19.434018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.434958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.434990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.435122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.435150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.435333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.435382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 
00:25:30.209 [2024-07-12 12:02:19.435505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.435560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.435681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.435710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.435810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.435853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.436882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.436927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.437075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 
00:25:30.209 [2024-07-12 12:02:19.437264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.437415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.437556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.437721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.437903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.437932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 
00:25:30.209 [2024-07-12 12:02:19.438848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.438882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.438979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.439124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.439302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.439469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.439713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.439862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.439907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.440047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.440077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.440207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.440236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.440342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.440370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 
00:25:30.209 [2024-07-12 12:02:19.440508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.440552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.440678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.209 [2024-07-12 12:02:19.440708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.209 qpair failed and we were unable to recover it. 00:25:30.209 [2024-07-12 12:02:19.440862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.440895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.440985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.441886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.441912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 
00:25:30.210 [2024-07-12 12:02:19.442025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.442847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.442889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 
00:25:30.210 [2024-07-12 12:02:19.443385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.443871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.443900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.444717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 
00:25:30.210 [2024-07-12 12:02:19.444892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.444930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.445952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.445978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.446062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.446168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.446303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 
00:25:30.210 [2024-07-12 12:02:19.446455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.446583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.210 qpair failed and we were unable to recover it. 00:25:30.210 [2024-07-12 12:02:19.446720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.210 [2024-07-12 12:02:19.446745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.446862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.446893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.446980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 
00:25:30.211 [2024-07-12 12:02:19.447755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.447895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.447921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.448895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.448938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 
00:25:30.211 [2024-07-12 12:02:19.449168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.449902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.449997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 
00:25:30.211 [2024-07-12 12:02:19.450543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.450883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.450994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.451778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 
00:25:30.211 [2024-07-12 12:02:19.451916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.451942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.452021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.452047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.452137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.452162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.211 [2024-07-12 12:02:19.452291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.211 [2024-07-12 12:02:19.452316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.211 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.452401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.452427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.452547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.452572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.452662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.452688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.452892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.452926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 
00:25:30.212 [2024-07-12 12:02:19.453234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.453900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.453984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 
00:25:30.212 [2024-07-12 12:02:19.454583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.212 qpair failed and we were unable to recover it. 00:25:30.212 [2024-07-12 12:02:19.454969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.212 [2024-07-12 12:02:19.454994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.455725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 
00:25:30.213 [2024-07-12 12:02:19.455909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.455949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.456886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.456988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 
00:25:30.213 [2024-07-12 12:02:19.457252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.457882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.457918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.458005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.458031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.458115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.458144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.458234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.458259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 00:25:30.213 [2024-07-12 12:02:19.458375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.458401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.213 qpair failed and we were unable to recover it. 
00:25:30.213 [2024-07-12 12:02:19.458477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.213 [2024-07-12 12:02:19.458502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.458585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.458612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.458729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.458755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.458880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.458916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 
00:25:30.214 [2024-07-12 12:02:19.459832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.459971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.459997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.460931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.460962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 
00:25:30.214 [2024-07-12 12:02:19.461053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.461957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.461983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 
00:25:30.214 [2024-07-12 12:02:19.462389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.462940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.462968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 
00:25:30.214 [2024-07-12 12:02:19.463757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.214 [2024-07-12 12:02:19.463883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.214 [2024-07-12 12:02:19.463919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.214 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.464881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.464908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 
00:25:30.215 [2024-07-12 12:02:19.465028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.465962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.465990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 
00:25:30.215 [2024-07-12 12:02:19.466346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.466921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.466948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 
00:25:30.215 [2024-07-12 12:02:19.467785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.467904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.467931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.468909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.468936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 
00:25:30.215 [2024-07-12 12:02:19.469053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.469080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.469195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.469221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.215 qpair failed and we were unable to recover it. 00:25:30.215 [2024-07-12 12:02:19.469309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.215 [2024-07-12 12:02:19.469335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.469427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.469455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.469599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.469626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.469729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.469769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.469874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.469903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.469989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 
00:25:30.216 [2024-07-12 12:02:19.470405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.470945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.470973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 
00:25:30.216 [2024-07-12 12:02:19.471762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.471899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.471926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.472884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.472909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 
00:25:30.216 [2024-07-12 12:02:19.472994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.473862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.473988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.474103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.474240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 
00:25:30.216 [2024-07-12 12:02:19.474379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.474490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.474596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.216 qpair failed and we were unable to recover it. 00:25:30.216 [2024-07-12 12:02:19.474764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.216 [2024-07-12 12:02:19.474793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.474895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.474922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 
00:25:30.217 [2024-07-12 12:02:19.475786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.475934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.475965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.476960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.476988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.477068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 
00:25:30.217 [2024-07-12 12:02:19.477228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.477379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.477509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.477627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.477810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.477854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 
00:25:30.217 [2024-07-12 12:02:19.478784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.478950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.478976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.479892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.217 [2024-07-12 12:02:19.479923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.217 qpair failed and we were unable to recover it. 00:25:30.217 [2024-07-12 12:02:19.480038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 
00:25:30.218 [2024-07-12 12:02:19.480186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.480901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.480993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.481112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.481253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.481419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 
00:25:30.218 [2024-07-12 12:02:19.481573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.481751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.481944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.481974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.482878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.482908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 
00:25:30.218 [2024-07-12 12:02:19.483182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.483912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.483952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 
00:25:30.218 [2024-07-12 12:02:19.484686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.484957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.484984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 00:25:30.218 [2024-07-12 12:02:19.485855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.485887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.218 qpair failed and we were unable to recover it. 
00:25:30.218 [2024-07-12 12:02:19.486030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.218 [2024-07-12 12:02:19.486058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.486887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.486927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.487025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.487052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.487160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.487212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 00:25:30.219 [2024-07-12 12:02:19.487387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.219 [2024-07-12 12:02:19.487436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.219 qpair failed and we were unable to recover it. 
00:25:30.219 [2024-07-12 12:02:19.487596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.219 [2024-07-12 12:02:19.487644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:30.219 qpair failed and we were unable to recover it.
00:25:30.219 [... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 12:02:19.487596 through 12:02:19.515735, alternating between tqpair=0x7fba6c000b90 and 0x7fba7c000b90 with a single occurrence of tqpair=0x7fba74000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:30.224 [2024-07-12 12:02:19.515860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.515895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.516831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.516859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 
00:25:30.224 [2024-07-12 12:02:19.517279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.517857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.517990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.518017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.224 [2024-07-12 12:02:19.518101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.224 [2024-07-12 12:02:19.518127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.224 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.518253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.518296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.518451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.518480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.518609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.518638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 
00:25:30.225 [2024-07-12 12:02:19.518735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.518764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.518887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.518931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.519908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.519952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 
00:25:30.225 [2024-07-12 12:02:19.520194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.520882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.520916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 
00:25:30.225 [2024-07-12 12:02:19.521498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.521847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.521880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 
00:25:30.225 [2024-07-12 12:02:19.522864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.522896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.522987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.225 qpair failed and we were unable to recover it. 00:25:30.225 [2024-07-12 12:02:19.523925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.225 [2024-07-12 12:02:19.523951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 
00:25:30.226 [2024-07-12 12:02:19.524333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.524854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.524889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 
00:25:30.226 [2024-07-12 12:02:19.525707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.525841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.525875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.526862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.526909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 
00:25:30.226 [2024-07-12 12:02:19.527114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.527882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.527917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 
00:25:30.226 [2024-07-12 12:02:19.528555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.226 [2024-07-12 12:02:19.528859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.226 [2024-07-12 12:02:19.528891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.226 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.528973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.528998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.529916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.529942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 
00:25:30.227 [2024-07-12 12:02:19.530060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.530841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.530875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 
00:25:30.227 [2024-07-12 12:02:19.531478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.531894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.531932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 
00:25:30.227 [2024-07-12 12:02:19.532878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.532905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.532997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.533963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.533989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.534075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.534119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 
00:25:30.227 [2024-07-12 12:02:19.534241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.534269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.534393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.534422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.534550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.534579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.227 [2024-07-12 12:02:19.534687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.227 [2024-07-12 12:02:19.534715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.227 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.534823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.534849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.534981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 
00:25:30.228 [2024-07-12 12:02:19.535642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.535926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.535953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.536854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.536890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 
00:25:30.228 [2024-07-12 12:02:19.537023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.537900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.537927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 
00:25:30.228 [2024-07-12 12:02:19.538329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.538882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.538909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 
00:25:30.228 [2024-07-12 12:02:19.539572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.539906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.539993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.540018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.228 [2024-07-12 12:02:19.540138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.228 [2024-07-12 12:02:19.540163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.228 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.540249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.540418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.540533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.540658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.540785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 
00:25:30.229 [2024-07-12 12:02:19.540924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.540950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.541889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.541917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 
00:25:30.229 [2024-07-12 12:02:19.542309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.542967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.542994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 
00:25:30.229 [2024-07-12 12:02:19.543555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.543906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.543996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.544824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 
00:25:30.229 [2024-07-12 12:02:19.544967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.544994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.545110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.545136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.545249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.545275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.545358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.545383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.545465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.545491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.229 [2024-07-12 12:02:19.545583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.229 [2024-07-12 12:02:19.545609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.229 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.545749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.545775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.545858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.545890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.545985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.546119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 
00:25:30.230 [2024-07-12 12:02:19.546255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.546397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.546507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.546663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.546848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.546885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.547021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.547179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.547344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.547504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.547655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 
00:25:30.230 [2024-07-12 12:02:19.547836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.547883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.548833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.548978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 
00:25:30.230 [2024-07-12 12:02:19.549381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.549903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.549929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.550015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.550124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.550263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.550403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.230 [2024-07-12 12:02:19.550512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 
00:25:30.230 [2024-07-12 12:02:19.550651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.230 [2024-07-12 12:02:19.550677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.230 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.550779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.550805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.550888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.550914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 
00:25:30.231 [2024-07-12 12:02:19.551869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.551895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.551981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.552875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.552902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 
00:25:30.231 [2024-07-12 12:02:19.553139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.553909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.553994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.554020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.554131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.554156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 00:25:30.231 [2024-07-12 12:02:19.554235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.231 [2024-07-12 12:02:19.554260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.231 qpair failed and we were unable to recover it. 
00:25:30.231 [2024-07-12 12:02:19.554356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.554396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.554516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.554543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.554633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.554659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.554770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.554797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.554887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.554914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 
00:25:30.232 [2024-07-12 12:02:19.555647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.555885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.555913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.556820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 
00:25:30.232 [2024-07-12 12:02:19.556962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.556988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.557906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.557994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 
00:25:30.232 [2024-07-12 12:02:19.558288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.558890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.558916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.559015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.559041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.559129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.559155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.559296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.559322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 00:25:30.232 [2024-07-12 12:02:19.559438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.559463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.232 qpair failed and we were unable to recover it. 
00:25:30.232 [2024-07-12 12:02:19.559577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.232 [2024-07-12 12:02:19.559602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.559713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.559739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.559882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.559909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.560812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 
00:25:30.233 [2024-07-12 12:02:19.560938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.560964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.561939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.561966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 
00:25:30.233 [2024-07-12 12:02:19.562177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.562908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.562935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 
00:25:30.233 [2024-07-12 12:02:19.563440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.563965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.563991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 
00:25:30.233 [2024-07-12 12:02:19.564728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.233 [2024-07-12 12:02:19.564754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.233 qpair failed and we were unable to recover it. 00:25:30.233 [2024-07-12 12:02:19.564841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.564873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.564960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.564986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.565821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.565849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 
00:25:30.234 [2024-07-12 12:02:19.565977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.566928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.566954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 
00:25:30.234 [2024-07-12 12:02:19.567337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.567872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.567992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 
00:25:30.234 [2024-07-12 12:02:19.568668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.568907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.568948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.234 [2024-07-12 12:02:19.569678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.234 [2024-07-12 12:02:19.569704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.234 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.569794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.569820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 
00:25:30.235 [2024-07-12 12:02:19.569910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.569937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.570934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.570960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 
00:25:30.235 [2024-07-12 12:02:19.571186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.571895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.571922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 
00:25:30.235 [2024-07-12 12:02:19.572408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.572940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.572970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 
00:25:30.235 [2024-07-12 12:02:19.573745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.573906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.573988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.574018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.235 [2024-07-12 12:02:19.574145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.235 [2024-07-12 12:02:19.574171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.235 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.574937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.574964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 
00:25:30.236 [2024-07-12 12:02:19.575083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.575861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.575897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 
00:25:30.236 [2024-07-12 12:02:19.576445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.576860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.576972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 
00:25:30.236 [2024-07-12 12:02:19.577778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.577930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.577957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.578932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.578959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.579040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.579066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 
00:25:30.236 [2024-07-12 12:02:19.579149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.579175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.579284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.579311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.579439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.579472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.236 [2024-07-12 12:02:19.579587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.236 [2024-07-12 12:02:19.579613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.236 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.579749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.579774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.579886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.579924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.580043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.580159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.580356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.580522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 
00:25:30.237 [2024-07-12 12:02:19.580665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.580825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.580854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.581816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.581845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 
00:25:30.237 [2024-07-12 12:02:19.582290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.582915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.582941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 
00:25:30.237 [2024-07-12 12:02:19.583732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.583899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.583980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.584935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.584964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.585076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.585102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 
00:25:30.237 [2024-07-12 12:02:19.585190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.585215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.585327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.585352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.585468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.585498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.237 qpair failed and we were unable to recover it. 00:25:30.237 [2024-07-12 12:02:19.585655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.237 [2024-07-12 12:02:19.585684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.585814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.585843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.585985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 
00:25:30.238 [2024-07-12 12:02:19.586676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.586878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.586993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.587948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.587973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 
00:25:30.238 [2024-07-12 12:02:19.588064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.588968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.588997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 
00:25:30.238 [2024-07-12 12:02:19.589527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.589890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.589981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 
00:25:30.238 [2024-07-12 12:02:19.590843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.590887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.590999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.591025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.591168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.591198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.591332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.238 [2024-07-12 12:02:19.591363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.238 qpair failed and we were unable to recover it. 00:25:30.238 [2024-07-12 12:02:19.591499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.591526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.591632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.591657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.591760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.591789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.591949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.591975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 
00:25:30.239 [2024-07-12 12:02:19.592387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.592905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.592932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.593067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.593234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.593371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.593535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.593754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 
00:25:30.239 [2024-07-12 12:02:19.593923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.593949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.594935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.594967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 
00:25:30.239 [2024-07-12 12:02:19.595417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.595908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.595934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 
00:25:30.239 [2024-07-12 12:02:19.596743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.596908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.239 [2024-07-12 12:02:19.596944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.239 qpair failed and we were unable to recover it. 00:25:30.239 [2024-07-12 12:02:19.597073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.597259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.597437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.597602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.597739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.597854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.597886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 
00:25:30.240 [2024-07-12 12:02:19.598339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.598917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.598943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 
00:25:30.240 [2024-07-12 12:02:19.599721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.599960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.599987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.600831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.600857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.601046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 
00:25:30.240 [2024-07-12 12:02:19.601236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.601381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.601561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.601700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.601848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.601880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.602004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.602031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.602208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.602234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.602348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.240 [2024-07-12 12:02:19.602390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.240 qpair failed and we were unable to recover it. 00:25:30.240 [2024-07-12 12:02:19.602499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.602530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.602666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.602691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 
00:25:30.241 [2024-07-12 12:02:19.602810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.602835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.603892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.603989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 
00:25:30.241 [2024-07-12 12:02:19.604176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.604338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.604455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.604594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.604715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.604880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.604907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.605047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.605234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.605399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.605566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 
00:25:30.241 [2024-07-12 12:02:19.605744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.605879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.605905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.606879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.606905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.607019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 
00:25:30.241 [2024-07-12 12:02:19.607169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.607362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.607528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.607671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.607838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.607881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.608006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.241 [2024-07-12 12:02:19.608032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.241 qpair failed and we were unable to recover it. 00:25:30.241 [2024-07-12 12:02:19.608168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.608328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.608468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.608630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 
00:25:30.242 [2024-07-12 12:02:19.608787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.608961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.608988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.609930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.609957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 
00:25:30.242 [2024-07-12 12:02:19.610377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.610937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.610964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.611085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.611269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.611409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.611584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.611718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 
00:25:30.242 [2024-07-12 12:02:19.611887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.611913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.612959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.612984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 
00:25:30.242 [2024-07-12 12:02:19.613385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.613954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.613981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.614089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.242 [2024-07-12 12:02:19.614114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.242 qpair failed and we were unable to recover it. 00:25:30.242 [2024-07-12 12:02:19.614220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.614379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.614521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.614684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 
00:25:30.243 [2024-07-12 12:02:19.614815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.614945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.614971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.615878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.615931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 
00:25:30.243 [2024-07-12 12:02:19.616292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.616915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.616943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.617218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.617395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.617534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.617671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 
00:25:30.243 [2024-07-12 12:02:19.617836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.617873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.618920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.618946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 
00:25:30.243 [2024-07-12 12:02:19.619413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.619860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.619985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.620011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.620151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.620180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.620314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-12 12:02:19.620340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.243 qpair failed and we were unable to recover it. 00:25:30.243 [2024-07-12 12:02:19.620456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.620482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.620571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.620597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.620708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.620736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 
00:25:30.244 [2024-07-12 12:02:19.620837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.620872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.620985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.621900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.621988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 
00:25:30.244 [2024-07-12 12:02:19.622129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.622269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.622408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.622580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.622792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.622940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.622967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 
00:25:30.244 [2024-07-12 12:02:19.623676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.623939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.623965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.624953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.624992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 
00:25:30.244 [2024-07-12 12:02:19.625101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.625127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.244 [2024-07-12 12:02:19.625256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-12 12:02:19.625285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.244 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.625384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.625410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.625504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.625530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.625651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.625680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.625787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.625813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.625959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.625985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 
00:25:30.245 [2024-07-12 12:02:19.626473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.626847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.626910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.627749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 
00:25:30.245 [2024-07-12 12:02:19.627889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.627916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.628837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.628863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 
00:25:30.245 [2024-07-12 12:02:19.629346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.629917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.629946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.630084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.630110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.630226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.630252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.630333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.630360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.630475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.630501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 00:25:30.245 [2024-07-12 12:02:19.630620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-12 12:02:19.630650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.245 qpair failed and we were unable to recover it. 
00:25:30.246 [2024-07-12 12:02:19.635096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.635820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.635991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.636035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.636211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.636239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.636373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.636403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 00:25:30.246 [2024-07-12 12:02:19.636534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.246 [2024-07-12 12:02:19.636588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.246 qpair failed and we were unable to recover it. 
00:25:30.248 [2024-07-12 12:02:19.645931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.645957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.646912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.646940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.647025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.647051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 00:25:30.248 [2024-07-12 12:02:19.647178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.248 [2024-07-12 12:02:19.647206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.248 qpair failed and we were unable to recover it. 
00:25:30.250 [2024-07-12 12:02:19.659465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.250 [2024-07-12 12:02:19.659509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.250 qpair failed and we were unable to recover it. 00:25:30.250 [2024-07-12 12:02:19.659621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.250 [2024-07-12 12:02:19.659646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.250 qpair failed and we were unable to recover it. 00:25:30.250 [2024-07-12 12:02:19.659756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.250 [2024-07-12 12:02:19.659782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.250 qpair failed and we were unable to recover it. 00:25:30.250 [2024-07-12 12:02:19.659874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.250 [2024-07-12 12:02:19.659900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.250 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.659987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 
00:25:30.251 [2024-07-12 12:02:19.660772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.660917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.660946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.661040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.661071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.661161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.661188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.661327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.661354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.251 [2024-07-12 12:02:19.661469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.251 [2024-07-12 12:02:19.661495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.251 qpair failed and we were unable to recover it. 00:25:30.527 [2024-07-12 12:02:19.661648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.527 [2024-07-12 12:02:19.661675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.527 qpair failed and we were unable to recover it. 00:25:30.527 [2024-07-12 12:02:19.661822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.527 [2024-07-12 12:02:19.661848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.527 qpair failed and we were unable to recover it. 00:25:30.527 [2024-07-12 12:02:19.661985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.527 [2024-07-12 12:02:19.662030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.527 qpair failed and we were unable to recover it. 00:25:30.527 [2024-07-12 12:02:19.662160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.527 [2024-07-12 12:02:19.662189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.527 qpair failed and we were unable to recover it. 
00:25:30.528 [2024-07-12 12:02:19.662286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.662316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.662440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.662488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.662617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.662646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.662790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.662816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.662921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.662947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 
00:25:30.528 [2024-07-12 12:02:19.663704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.663841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.663876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.664893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.664921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 
00:25:30.528 [2024-07-12 12:02:19.665177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.665858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.665894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 
00:25:30.528 [2024-07-12 12:02:19.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.666940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.666967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.667846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 00:25:30.528 [2024-07-12 12:02:19.667990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.668017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.528 qpair failed and we were unable to recover it. 
00:25:30.528 [2024-07-12 12:02:19.668104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.528 [2024-07-12 12:02:19.668129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.668267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.668296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.668425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.668454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.668585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.668613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.668741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.668770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.668906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.668945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 
00:25:30.529 [2024-07-12 12:02:19.669659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.669847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.669977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.670828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.670977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 
00:25:30.529 [2024-07-12 12:02:19.671185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.671393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.671561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.671751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.671906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.671936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.672051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.672209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.672433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.672611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.672790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 
00:25:30.529 [2024-07-12 12:02:19.672963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.672990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.673125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.673169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.673331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.529 [2024-07-12 12:02:19.673375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.529 qpair failed and we were unable to recover it. 00:25:30.529 [2024-07-12 12:02:19.673468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.673494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.673605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.673630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.673772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.673798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.673944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.673970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 
00:25:30.530 [2024-07-12 12:02:19.674458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.674933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.674972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.675070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.675096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.675182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.675223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.675406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.675453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.675686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.675740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.675871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.675900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 
00:25:30.530 [2024-07-12 12:02:19.676169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.676796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.676978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.677122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.677296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.677477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.677600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 
00:25:30.530 [2024-07-12 12:02:19.677758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.677888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.530 [2024-07-12 12:02:19.677913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.530 qpair failed and we were unable to recover it. 00:25:30.530 [2024-07-12 12:02:19.678031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.678902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.678927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 
00:25:30.531 [2024-07-12 12:02:19.679146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.679859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.679980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.680136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.680341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.680578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.680747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 
00:25:30.531 [2024-07-12 12:02:19.680862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.680893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.680980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.681957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.681986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.682077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.682105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 
00:25:30.531 [2024-07-12 12:02:19.682221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.682249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.682403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.682447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.682583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.531 [2024-07-12 12:02:19.682627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.531 qpair failed and we were unable to recover it. 00:25:30.531 [2024-07-12 12:02:19.682744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.682770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.682859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.682890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.683024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.683195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.683360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.683506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.683684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 
00:25:30.532 [2024-07-12 12:02:19.683876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.683904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.684873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.684994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.685116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.685246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 
00:25:30.532 [2024-07-12 12:02:19.685449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.685634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.685769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.685906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.685932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.686962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.686988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 
00:25:30.532 [2024-07-12 12:02:19.687111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.687139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.687293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.532 [2024-07-12 12:02:19.687341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.532 qpair failed and we were unable to recover it. 00:25:30.532 [2024-07-12 12:02:19.687481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.687524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.687635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.687662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.687796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.687822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.687918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.687943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 
00:25:30.533 [2024-07-12 12:02:19.688513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.688845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.688879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.689940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.689965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 
00:25:30.533 [2024-07-12 12:02:19.690056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.690158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.690300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.690625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.533 [2024-07-12 12:02:19.690755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.533 [2024-07-12 12:02:19.690799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.533 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.690941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.690966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 
00:25:30.534 [2024-07-12 12:02:19.691525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.691885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.691999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.692795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 
00:25:30.534 [2024-07-12 12:02:19.692911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.692936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.693876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.693919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 
00:25:30.534 [2024-07-12 12:02:19.694284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.694914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.534 [2024-07-12 12:02:19.694939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.534 qpair failed and we were unable to recover it. 00:25:30.534 [2024-07-12 12:02:19.695033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.695175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.695304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.695458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.695614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 
00:25:30.535 [2024-07-12 12:02:19.695751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.695887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.695913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.696956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.696981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 
00:25:30.535 [2024-07-12 12:02:19.697072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.697973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.697998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.698097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.698227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.698382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 
00:25:30.535 [2024-07-12 12:02:19.698566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.698756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.698884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.698927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.535 [2024-07-12 12:02:19.699041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.535 [2024-07-12 12:02:19.699067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.535 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.699864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.699913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 
00:25:30.536 [2024-07-12 12:02:19.700001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.700835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.700984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 
00:25:30.536 [2024-07-12 12:02:19.701381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.701886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.701977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 
00:25:30.536 [2024-07-12 12:02:19.702781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.702948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.702973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.703090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.703115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.703253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.703280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.703391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.536 [2024-07-12 12:02:19.703416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.536 qpair failed and we were unable to recover it. 00:25:30.536 [2024-07-12 12:02:19.703528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.703553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.703635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.703660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.703790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.703818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.703953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.703979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 
00:25:30.537 [2024-07-12 12:02:19.704213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.704964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.704989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 
00:25:30.537 [2024-07-12 12:02:19.705464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.705878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.705993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 
00:25:30.537 [2024-07-12 12:02:19.706698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.706960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.706985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.707066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.707091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.707186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.537 [2024-07-12 12:02:19.707211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.537 qpair failed and we were unable to recover it. 00:25:30.537 [2024-07-12 12:02:19.707300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.707325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.707417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.707442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.707553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.707578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.707694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.707719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.707829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.707854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 
00:25:30.538 [2024-07-12 12:02:19.707980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.708966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.708991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 
00:25:30.538 [2024-07-12 12:02:19.709354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.709857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.709975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 
00:25:30.538 [2024-07-12 12:02:19.710642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.710836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.710988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.711014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.538 [2024-07-12 12:02:19.711132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.538 [2024-07-12 12:02:19.711158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.538 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.711942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.711971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 
00:25:30.539 [2024-07-12 12:02:19.712110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.712893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.712919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 
00:25:30.539 [2024-07-12 12:02:19.713413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.713919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.713945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.714036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.539 [2024-07-12 12:02:19.714061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.539 qpair failed and we were unable to recover it. 00:25:30.539 [2024-07-12 12:02:19.714148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.714258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.714387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.714552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 
00:25:30.540 [2024-07-12 12:02:19.714758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.714902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.714929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.715905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.715986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 
00:25:30.540 [2024-07-12 12:02:19.716103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.716258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.716412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.716612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.716773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.716924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.716959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 
00:25:30.540 [2024-07-12 12:02:19.717672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.717964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.717989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.718077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.718103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.540 [2024-07-12 12:02:19.718195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.540 [2024-07-12 12:02:19.718220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.540 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.718360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.718388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.718486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.718514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.718647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.718675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.718832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.718886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.718983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 
00:25:30.541 [2024-07-12 12:02:19.719122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.719331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.719490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.719652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.719807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.719933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.719959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 
00:25:30.541 [2024-07-12 12:02:19.720543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.720937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.720965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 
00:25:30.541 [2024-07-12 12:02:19.721837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.721952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.721993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.722126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.541 [2024-07-12 12:02:19.722154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.541 qpair failed and we were unable to recover it. 00:25:30.541 [2024-07-12 12:02:19.722303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.722331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.722427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.722458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.722557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.722586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.722728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.722754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.722841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.722873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.722979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 
00:25:30.542 [2024-07-12 12:02:19.723283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.723895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.723986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.724099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.724237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.724357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.724493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 
00:25:30.542 [2024-07-12 12:02:19.724675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.724828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.724856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.725845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.725881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.726016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.726042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 
00:25:30.542 [2024-07-12 12:02:19.726156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.542 [2024-07-12 12:02:19.726182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.542 qpair failed and we were unable to recover it. 00:25:30.542 [2024-07-12 12:02:19.726262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.726419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.726545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.726688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.726857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.726966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.726991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 
00:25:30.543 [2024-07-12 12:02:19.727478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.727900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.727941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 00:25:30.543 [2024-07-12 12:02:19.728612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.543 [2024-07-12 12:02:19.728653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.543 qpair failed and we were unable to recover it. 
00:25:30.543 [2024-07-12 12:02:19.728846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.543 [2024-07-12 12:02:19.728877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:30.543 qpair failed and we were unable to recover it.
00:25:30.544 [2024-07-12 12:02:19.732878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.544 [2024-07-12 12:02:19.732918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:30.544 qpair failed and we were unable to recover it.
00:25:30.545 [2024-07-12 12:02:19.736301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.545 [2024-07-12 12:02:19.736341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:30.545 qpair failed and we were unable to recover it.
[ ... this three-line connect()/qpair failure sequence repeats without interruption from 12:02:19.728846 through 12:02:19.756349 (console timestamps 00:25:30.543-00:25:30.551), cycling among tqpair=0x1349f90, tqpair=0x7fba6c000b90 and tqpair=0x7fba7c000b90, always against addr=10.0.0.2, port=4420 with errno = 111 ... ]
00:25:30.551 [2024-07-12 12:02:19.756324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.551 [2024-07-12 12:02:19.756349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:30.551 qpair failed and we were unable to recover it.
00:25:30.551 [2024-07-12 12:02:19.756430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.756455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.756571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.756596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.756686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.756711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.756795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.756820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.756932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.756957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 
00:25:30.551 [2024-07-12 12:02:19.757671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.757916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.757941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.551 [2024-07-12 12:02:19.758023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.551 [2024-07-12 12:02:19.758048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.551 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 
00:25:30.552 [2024-07-12 12:02:19.758852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.758887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.758998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.759956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.759981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 
00:25:30.552 [2024-07-12 12:02:19.760187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.760890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.760984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.761101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.761220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.761395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 
00:25:30.552 [2024-07-12 12:02:19.761530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.761657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.552 [2024-07-12 12:02:19.761687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.552 qpair failed and we were unable to recover it. 00:25:30.552 [2024-07-12 12:02:19.761787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.761817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.761970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.761998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 
00:25:30.553 [2024-07-12 12:02:19.762828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.762947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.762974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.763898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.763939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 
00:25:30.553 [2024-07-12 12:02:19.764178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.764893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.764982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.765008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.765096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.765123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.765217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.765243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 00:25:30.553 [2024-07-12 12:02:19.765331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.553 [2024-07-12 12:02:19.765357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.553 qpair failed and we were unable to recover it. 
00:25:30.553 [2024-07-12 12:02:19.765446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.765472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.765572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.765611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.765752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.765781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.765917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.765944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 
00:25:30.554 [2024-07-12 12:02:19.766849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.766883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.766996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.767878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.767990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 
00:25:30.554 [2024-07-12 12:02:19.768095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.768862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.768980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.769005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.554 [2024-07-12 12:02:19.769087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.554 [2024-07-12 12:02:19.769111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.554 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.769194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 
00:25:30.555 [2024-07-12 12:02:19.769334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.769473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.769610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.769759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.769898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.769925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 
00:25:30.555 [2024-07-12 12:02:19.770623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.770903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.770990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.771735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 
00:25:30.555 [2024-07-12 12:02:19.771880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.771907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.555 [2024-07-12 12:02:19.772747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.555 qpair failed and we were unable to recover it. 00:25:30.555 [2024-07-12 12:02:19.772859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.772891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.772987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 
00:25:30.556 [2024-07-12 12:02:19.773200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.773893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.773996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 
00:25:30.556 [2024-07-12 12:02:19.774514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.774809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.774970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.775893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.775938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 
00:25:30.556 [2024-07-12 12:02:19.776025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.776051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.776137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.776164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.776291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.776320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.776455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.776481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.556 [2024-07-12 12:02:19.776594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.556 [2024-07-12 12:02:19.776625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.556 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.776771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.776802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.776930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.776956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 
00:25:30.557 [2024-07-12 12:02:19.777521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.777946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.777971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.778817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.778844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 
00:25:30.557 [2024-07-12 12:02:19.779025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.779167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.779320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.779436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.779589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.779657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1357bb0 (9): Bad file descriptor 00:25:30.557 [2024-07-12 12:02:19.779828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.557 [2024-07-12 12:02:19.779874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.557 qpair failed and we were unable to recover it. 00:25:30.557 [2024-07-12 12:02:19.780003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.780162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.780310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 
00:25:30.558 [2024-07-12 12:02:19.780484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.780658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.780825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.780956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.780984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.781829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.781876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 
00:25:30.558 [2024-07-12 12:02:19.782010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.782878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.782999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 
00:25:30.558 [2024-07-12 12:02:19.783525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.783948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.783976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.784113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.558 [2024-07-12 12:02:19.784141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.558 qpair failed and we were unable to recover it. 00:25:30.558 [2024-07-12 12:02:19.784266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.784294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.784395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.784423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.784540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.784568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.784715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.784744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.784926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.784952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 
00:25:30.559 [2024-07-12 12:02:19.785056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.785837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.785983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.786107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.786219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.786362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 
00:25:30.559 [2024-07-12 12:02:19.786515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.786623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.786805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.786845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.787919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.787945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 
00:25:30.559 [2024-07-12 12:02:19.788090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.788117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.788240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.788266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.788388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.788414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.559 [2024-07-12 12:02:19.788531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.559 [2024-07-12 12:02:19.788558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.559 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.788679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.788707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.788802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.788833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.788935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.788962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 
00:25:30.560 [2024-07-12 12:02:19.789423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.789895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.789992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 
00:25:30.560 [2024-07-12 12:02:19.790801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.790926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.790951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.791946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.791972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.792064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.792089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 
00:25:30.560 [2024-07-12 12:02:19.792205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.792233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.792342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.560 [2024-07-12 12:02:19.792367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.560 qpair failed and we were unable to recover it. 00:25:30.560 [2024-07-12 12:02:19.792480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.792513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.792604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.792636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.792761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.792787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.792886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.792925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.793043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.793211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.793351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.793507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 
00:25:30.561 [2024-07-12 12:02:19.793705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.793894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.793926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.794929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.794956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 
00:25:30.561 [2024-07-12 12:02:19.795048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.795894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.795921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.796035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.796061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.796187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.796212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.561 [2024-07-12 12:02:19.796308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.796334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 
00:25:30.561 [2024-07-12 12:02:19.796429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.561 [2024-07-12 12:02:19.796455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.561 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.796572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.796598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.796745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.796771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.796897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.796931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.797738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 
00:25:30.562 [2024-07-12 12:02:19.797877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.797903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.798902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.798930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 
00:25:30.562 [2024-07-12 12:02:19.799361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.799964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.799988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.800095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.800119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.800274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.800299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.800410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.800437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.562 [2024-07-12 12:02:19.800570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.562 [2024-07-12 12:02:19.800613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.562 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.800776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.800805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 
00:25:30.563 [2024-07-12 12:02:19.800920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.800946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.801973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.801998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.802083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.802252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 
00:25:30.563 [2024-07-12 12:02:19.802426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.802565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.802701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.802885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.802911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 
00:25:30.563 [2024-07-12 12:02:19.803846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.803880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.803983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.804008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.804097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.804122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.804238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.804264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.804377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.804405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.804515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.563 [2024-07-12 12:02:19.804540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.563 qpair failed and we were unable to recover it. 00:25:30.563 [2024-07-12 12:02:19.804686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.804714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.804804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.804832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.804964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.805105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 
00:25:30.564 [2024-07-12 12:02:19.805262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.805443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.805596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.805755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.805889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.805921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 
00:25:30.564 [2024-07-12 12:02:19.806677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.806928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.806954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.807892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.807922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.564 qpair failed and we were unable to recover it. 00:25:30.564 [2024-07-12 12:02:19.808009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.564 [2024-07-12 12:02:19.808035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 
00:25:30.565 [2024-07-12 12:02:19.808171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.808344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.808463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.808609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.808753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.808863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.808899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 
00:25:30.565 [2024-07-12 12:02:19.809563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.809883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.809922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.810828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.810854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 
00:25:30.565 [2024-07-12 12:02:19.811156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.811917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.811944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.812076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.565 [2024-07-12 12:02:19.812120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.565 qpair failed and we were unable to recover it. 00:25:30.565 [2024-07-12 12:02:19.812278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.812322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.812433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.812459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.812578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.812604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 
00:25:30.566 [2024-07-12 12:02:19.812728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.812755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.812877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.812905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.813850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.813995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.814184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 
00:25:30.566 [2024-07-12 12:02:19.814417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.814572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.814722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.814887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.814923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.815894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.815934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 
00:25:30.566 [2024-07-12 12:02:19.816059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.816087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.816189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.816214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.816383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.816411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.816509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.816555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.816686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.566 [2024-07-12 12:02:19.816711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.566 qpair failed and we were unable to recover it. 00:25:30.566 [2024-07-12 12:02:19.816833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.816876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.817016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.817201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.817370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.817568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 
00:25:30.567 [2024-07-12 12:02:19.817715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.817889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.817925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.818903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.818998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 
00:25:30.567 [2024-07-12 12:02:19.819165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.819305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.819454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.819565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.819730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.819846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.819879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 
00:25:30.567 [2024-07-12 12:02:19.820635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.820948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.820978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.567 qpair failed and we were unable to recover it. 00:25:30.567 [2024-07-12 12:02:19.821085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.567 [2024-07-12 12:02:19.821113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.821267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.821296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.821390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.821433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.821598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.821627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.821732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.821761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.821923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.821962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.822102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 
00:25:30.568 [2024-07-12 12:02:19.822252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.822430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.822588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.822758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.822920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.822946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 
00:25:30.568 [2024-07-12 12:02:19.823731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.823877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.823902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.824901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.824928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.825063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.825107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 
00:25:30.568 [2024-07-12 12:02:19.825247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.825292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.568 [2024-07-12 12:02:19.825386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.568 [2024-07-12 12:02:19.825413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.568 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.825501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.825528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.825642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.825668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.825813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.825839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.826000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.826216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.826370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.826550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.826714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 
00:25:30.569 [2024-07-12 12:02:19.826858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.826894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.827948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.827974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.828086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.828209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 
00:25:30.569 [2024-07-12 12:02:19.828331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.828490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.828667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.828837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.828890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.829738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 
00:25:30.569 [2024-07-12 12:02:19.829923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.829962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.569 [2024-07-12 12:02:19.830088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.569 [2024-07-12 12:02:19.830116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.569 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.830259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.830303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.830437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.830481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.830616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.830646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.830801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.830828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.830919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.830962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.831064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.831221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.831368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 
00:25:30.570 [2024-07-12 12:02:19.831496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.831727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.831861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.831895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.832838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.832864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 
00:25:30.570 [2024-07-12 12:02:19.833002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.833150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.833291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.833402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.833569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.570 [2024-07-12 12:02:19.833722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.570 [2024-07-12 12:02:19.833747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.570 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.833883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.833932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 
00:25:30.571 [2024-07-12 12:02:19.834510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.834930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.834959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.835824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.835851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 
00:25:30.571 [2024-07-12 12:02:19.835958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.836875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.836991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 
00:25:30.571 [2024-07-12 12:02:19.837454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.837964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.837990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.838108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.838134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.571 qpair failed and we were unable to recover it. 00:25:30.571 [2024-07-12 12:02:19.838270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.571 [2024-07-12 12:02:19.838297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.838398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.838425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.838516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.838544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.838665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.838694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 
00:25:30.572 [2024-07-12 12:02:19.838791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.838818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.838944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.838971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.839862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.839989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.840128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 
00:25:30.572 [2024-07-12 12:02:19.840312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.840491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.840622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.840793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.840959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.840988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 
00:25:30.572 [2024-07-12 12:02:19.841821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.841953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.841979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.842092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.842274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.842387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.842499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.572 [2024-07-12 12:02:19.842668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.572 qpair failed and we were unable to recover it. 00:25:30.572 [2024-07-12 12:02:19.842797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.842841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.842978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 
00:25:30.573 [2024-07-12 12:02:19.843291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.843847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.843960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 
00:25:30.573 [2024-07-12 12:02:19.844631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.844903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.844928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.845759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 
00:25:30.573 [2024-07-12 12:02:19.845902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.845929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.846949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.573 [2024-07-12 12:02:19.846977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.573 qpair failed and we were unable to recover it. 00:25:30.573 [2024-07-12 12:02:19.847097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 
00:25:30.574 [2024-07-12 12:02:19.847246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.847416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.847536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.847673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.847814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.847938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.847964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 
00:25:30.574 [2024-07-12 12:02:19.848639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.848904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.848930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.849798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.849823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 
00:25:30.574 [2024-07-12 12:02:19.849960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.850906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.850936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.851085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.851112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.851204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.851231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 
00:25:30.574 [2024-07-12 12:02:19.851343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.851369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.574 qpair failed and we were unable to recover it. 00:25:30.574 [2024-07-12 12:02:19.851510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.574 [2024-07-12 12:02:19.851537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.851628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.851654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.851747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.851773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.851871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.851898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 
00:25:30.575 [2024-07-12 12:02:19.852673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.852818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.852961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.853902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.853929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 
00:25:30.575 [2024-07-12 12:02:19.854223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.854921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.854997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.855023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.575 [2024-07-12 12:02:19.855172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.575 [2024-07-12 12:02:19.855198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.575 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.855281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.855416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 
00:25:30.576 [2024-07-12 12:02:19.855532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.855646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.855793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.855927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.855953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.856728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 
00:25:30.576 [2024-07-12 12:02:19.856894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.856920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.857950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.857976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 
00:25:30.576 [2024-07-12 12:02:19.858239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.858899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.858925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.859039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.859064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.859191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.859216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.859309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.859335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 00:25:30.576 [2024-07-12 12:02:19.859420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.859445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.576 qpair failed and we were unable to recover it. 
00:25:30.576 [2024-07-12 12:02:19.859584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.576 [2024-07-12 12:02:19.859611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.859703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.859729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.859830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.859875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.859974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.860732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 
00:25:30.577 [2024-07-12 12:02:19.860887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.860913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.861822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.861963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 
00:25:30.577 [2024-07-12 12:02:19.862300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.862953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.862979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 
00:25:30.577 [2024-07-12 12:02:19.863566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.863949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.577 [2024-07-12 12:02:19.863977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.577 qpair failed and we were unable to recover it. 00:25:30.577 [2024-07-12 12:02:19.864095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.864215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.864355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.864467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.864624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.864776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 
00:25:30.578 [2024-07-12 12:02:19.864889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.864924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.865923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.865961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 
00:25:30.578 [2024-07-12 12:02:19.866211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.866901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.866989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 
00:25:30.578 [2024-07-12 12:02:19.867539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.867944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.867984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.868134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.868293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.868411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.868521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.578 [2024-07-12 12:02:19.868634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.578 qpair failed and we were unable to recover it. 00:25:30.578 [2024-07-12 12:02:19.868727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.868753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 
00:25:30.579 [2024-07-12 12:02:19.868842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.868875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.869845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.869886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 
00:25:30.579 [2024-07-12 12:02:19.870284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.870949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.870976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 
00:25:30.579 [2024-07-12 12:02:19.871602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.871872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.871900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.872820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 
00:25:30.579 [2024-07-12 12:02:19.872953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.872979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.873122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.873243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.873386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.873537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.579 [2024-07-12 12:02:19.873658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.579 qpair failed and we were unable to recover it. 00:25:30.579 [2024-07-12 12:02:19.873743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.873769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.873863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.873902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.873981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 
00:25:30.580 [2024-07-12 12:02:19.874241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.874938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.874966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 
00:25:30.580 [2024-07-12 12:02:19.875620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.875859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.875890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.876817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 
00:25:30.580 [2024-07-12 12:02:19.876965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.876991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.580 qpair failed and we were unable to recover it. 00:25:30.580 [2024-07-12 12:02:19.877751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.580 [2024-07-12 12:02:19.877777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.877914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.877941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 
00:25:30.581 [2024-07-12 12:02:19.878351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.878859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.878970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 
00:25:30.581 [2024-07-12 12:02:19.879616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.879874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.879912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.880900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.880927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 
00:25:30.581 [2024-07-12 12:02:19.881049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.881927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.881954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 
00:25:30.581 [2024-07-12 12:02:19.882362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.581 [2024-07-12 12:02:19.882818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.581 qpair failed and we were unable to recover it. 00:25:30.581 [2024-07-12 12:02:19.882949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.882975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 
00:25:30.582 [2024-07-12 12:02:19.883742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.883970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.883997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.884883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.884922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 
00:25:30.582 [2024-07-12 12:02:19.885014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.885929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.885955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 
00:25:30.582 [2024-07-12 12:02:19.886367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.886950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.886977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 
00:25:30.582 [2024-07-12 12:02:19.887701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.582 [2024-07-12 12:02:19.887830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.582 [2024-07-12 12:02:19.887856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.582 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.887989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.888933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.888959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 
00:25:30.583 [2024-07-12 12:02:19.889071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.889909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.889935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 
00:25:30.583 [2024-07-12 12:02:19.890462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.890878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.890906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 
00:25:30.583 [2024-07-12 12:02:19.891770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.891943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.891980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.892873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.892995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.893020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 
00:25:30.583 [2024-07-12 12:02:19.893100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.893124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.583 qpair failed and we were unable to recover it. 00:25:30.583 [2024-07-12 12:02:19.893236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.583 [2024-07-12 12:02:19.893261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.893954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.893982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 
00:25:30.584 [2024-07-12 12:02:19.894359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.894914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.894954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 
00:25:30.584 [2024-07-12 12:02:19.895683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.895837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.895885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.896875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.896914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 
00:25:30.584 [2024-07-12 12:02:19.897057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.897852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.897981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.898011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.898129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.898155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.898243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.898268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 
00:25:30.584 [2024-07-12 12:02:19.898417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.898443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.584 qpair failed and we were unable to recover it. 00:25:30.584 [2024-07-12 12:02:19.898572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.584 [2024-07-12 12:02:19.898598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.898688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.898713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.898846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.898892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.898986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 
00:25:30.585 [2024-07-12 12:02:19.899733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.899969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.899995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.900893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.900928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 
00:25:30.585 [2024-07-12 12:02:19.901018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.901905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.901932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 
00:25:30.585 [2024-07-12 12:02:19.902291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.585 [2024-07-12 12:02:19.902887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.585 [2024-07-12 12:02:19.902914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.585 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 
00:25:30.586 [2024-07-12 12:02:19.903721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.903875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.903904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.904963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.904990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 
00:25:30.586 [2024-07-12 12:02:19.905079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.905828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.905878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 
00:25:30.586 [2024-07-12 12:02:19.906373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.586 [2024-07-12 12:02:19.906845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.586 qpair failed and we were unable to recover it. 00:25:30.586 [2024-07-12 12:02:19.906977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 
00:25:30.587 [2024-07-12 12:02:19.907807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.907940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.907967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.908860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.908897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 
00:25:30.587 [2024-07-12 12:02:19.909278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.909834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.909878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 
00:25:30.587 [2024-07-12 12:02:19.910573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.910885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.910975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.911810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 
00:25:30.587 [2024-07-12 12:02:19.911936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.911961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.912047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.912072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.912151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.912176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.912290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.912315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.912393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.587 [2024-07-12 12:02:19.912418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.587 qpair failed and we were unable to recover it. 00:25:30.587 [2024-07-12 12:02:19.912502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.912527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.912623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.912647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.912734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.912759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.912872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.912898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 
00:25:30.588 [2024-07-12 12:02:19.913133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.913899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.913994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 
00:25:30.588 [2024-07-12 12:02:19.914518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.914916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.914946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 
00:25:30.588 [2024-07-12 12:02:19.915722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.915882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.915989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.916880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.916983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 
00:25:30.588 [2024-07-12 12:02:19.917102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.917245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.917349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.917466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.917578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.588 [2024-07-12 12:02:19.917689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.588 [2024-07-12 12:02:19.917715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.588 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.917796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.917823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.917935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.917960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 
00:25:30.589 [2024-07-12 12:02:19.918277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.918932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.918960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 
00:25:30.589 [2024-07-12 12:02:19.919569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.919821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.919986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 
00:25:30.589 [2024-07-12 12:02:19.920834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.920968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.920993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.921949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.921977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.922067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.922094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 
00:25:30.589 [2024-07-12 12:02:19.922197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.922222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.589 [2024-07-12 12:02:19.922335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.589 [2024-07-12 12:02:19.922360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.589 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.922474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.922500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.922591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.922617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.922706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.922734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.922855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.922888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 
00:25:30.590 [2024-07-12 12:02:19.923492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.923875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.923902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 
00:25:30.590 [2024-07-12 12:02:19.924818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.924938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.924964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.925832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.925964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 
00:25:30.590 [2024-07-12 12:02:19.926100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.926915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.926943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.927059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.927089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.927197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.927223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 00:25:30.590 [2024-07-12 12:02:19.927307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.590 [2024-07-12 12:02:19.927334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.590 qpair failed and we were unable to recover it. 
00:25:30.590 [2024-07-12 12:02:19.927453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.927478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.927563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.927588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.927686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.927725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.927826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.927853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.927980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 
00:25:30.591 [2024-07-12 12:02:19.928841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.928875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.928983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.929941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.929968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 
00:25:30.591 [2024-07-12 12:02:19.930169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.930955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.930981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 
00:25:30.591 [2024-07-12 12:02:19.931437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.931973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.931999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 
00:25:30.591 [2024-07-12 12:02:19.932705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.591 qpair failed and we were unable to recover it. 00:25:30.591 [2024-07-12 12:02:19.932815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.591 [2024-07-12 12:02:19.932840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.932930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.932955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.933787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 
00:25:30.592 [2024-07-12 12:02:19.933917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.933943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.934857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.934895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 
00:25:30.592 [2024-07-12 12:02:19.935299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.935890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.935918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 
00:25:30.592 [2024-07-12 12:02:19.936773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.936890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.936920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.937955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.937982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 
00:25:30.592 [2024-07-12 12:02:19.938102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.938129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.938220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.938247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.592 qpair failed and we were unable to recover it. 00:25:30.592 [2024-07-12 12:02:19.938364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.592 [2024-07-12 12:02:19.938391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.938501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.938528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.938677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.938704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.938819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.938846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.938940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.938967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.939083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.939255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.939393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 
00:25:30.593 [2024-07-12 12:02:19.939540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.939687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.939871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.939898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.940850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.940883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 
00:25:30.593 [2024-07-12 12:02:19.940994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.941927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.941967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 
00:25:30.593 [2024-07-12 12:02:19.942386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.942914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.942943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 
00:25:30.593 [2024-07-12 12:02:19.943734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.943889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.943929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.944055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.593 [2024-07-12 12:02:19.944083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.593 qpair failed and we were unable to recover it. 00:25:30.593 [2024-07-12 12:02:19.944165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.944947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.944976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 
00:25:30.594 [2024-07-12 12:02:19.945096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.945895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.945981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 
00:25:30.594 [2024-07-12 12:02:19.946403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.946891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.946925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 
00:25:30.594 [2024-07-12 12:02:19.947729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.947864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.947911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.594 [2024-07-12 12:02:19.948595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.594 qpair failed and we were unable to recover it. 00:25:30.594 [2024-07-12 12:02:19.948690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.948716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.948823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.948849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 
00:25:30.595 [2024-07-12 12:02:19.949000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.949903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.949930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 
00:25:30.595 [2024-07-12 12:02:19.950292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.950938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.950964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 
00:25:30.595 [2024-07-12 12:02:19.951556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.951907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.951996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 
00:25:30.595 [2024-07-12 12:02:19.952825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.952874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.952975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.953810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.595 [2024-07-12 12:02:19.953844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.595 qpair failed and we were unable to recover it. 00:25:30.595 [2024-07-12 12:02:19.954050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 
00:25:30.596 [2024-07-12 12:02:19.954166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.954890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.954982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 
00:25:30.596 [2024-07-12 12:02:19.955359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.955965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.955994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 
00:25:30.596 [2024-07-12 12:02:19.956732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.956874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.956995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.957864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.957898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 
00:25:30.596 [2024-07-12 12:02:19.957983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.958942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.958969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 00:25:30.596 [2024-07-12 12:02:19.959088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.959114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.596 qpair failed and we were unable to recover it. 
00:25:30.596 [2024-07-12 12:02:19.959202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-07-12 12:02:19.959230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.959967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.959995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 
00:25:30.597 [2024-07-12 12:02:19.960489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.960887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.960928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.961748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 
00:25:30.597 [2024-07-12 12:02:19.961894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.961921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.962849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.962981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 
00:25:30.597 [2024-07-12 12:02:19.963269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.963925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.963952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.964046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.964183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.964293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.964403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 
00:25:30.597 [2024-07-12 12:02:19.964510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.597 qpair failed and we were unable to recover it. 00:25:30.597 [2024-07-12 12:02:19.964647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-07-12 12:02:19.964674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.964771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.964800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.964929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.964958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.965796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 
00:25:30.598 [2024-07-12 12:02:19.965923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.965951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.966932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.966959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 
00:25:30.598 [2024-07-12 12:02:19.967298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.967825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.967864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 
00:25:30.598 [2024-07-12 12:02:19.968678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.968901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.968929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.969044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.969070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.969159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.969186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.969268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.969295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.969422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-07-12 12:02:19.969451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.598 qpair failed and we were unable to recover it. 00:25:30.598 [2024-07-12 12:02:19.969539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.969566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.969682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.969708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.969801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.969829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 
00:25:30.599 [2024-07-12 12:02:19.969980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.970864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.970992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 
00:25:30.599 [2024-07-12 12:02:19.971397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.971931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.971959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 
00:25:30.599 [2024-07-12 12:02:19.972679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.972965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.972991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.973875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.973905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 
00:25:30.599 [2024-07-12 12:02:19.974024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.599 [2024-07-12 12:02:19.974760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.599 [2024-07-12 12:02:19.974787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.599 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.974899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.974926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 
00:25:30.600 [2024-07-12 12:02:19.975450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.975944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.975971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 
00:25:30.600 [2024-07-12 12:02:19.976721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.976945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.976976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.977857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.977889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 
00:25:30.600 [2024-07-12 12:02:19.977980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.978942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.978970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.979108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.979276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 
00:25:30.600 [2024-07-12 12:02:19.979405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.979545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.979700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.979902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.979931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.980026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.980053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.980134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.980161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.980254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.980280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.600 [2024-07-12 12:02:19.980395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.600 [2024-07-12 12:02:19.980421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.600 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.980533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.980562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.980679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.980707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 
00:25:30.601 [2024-07-12 12:02:19.980836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.980883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.980990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.981969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.981995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 
00:25:30.601 [2024-07-12 12:02:19.982215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.982944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.982972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 
00:25:30.601 [2024-07-12 12:02:19.983561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.983899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.983989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 
00:25:30.601 [2024-07-12 12:02:19.984838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.984962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.984988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.985829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.601 [2024-07-12 12:02:19.985856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.601 qpair failed and we were unable to recover it. 00:25:30.601 [2024-07-12 12:02:19.986002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 
00:25:30.602 [2024-07-12 12:02:19.986210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.986894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.986988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 
00:25:30.602 [2024-07-12 12:02:19.987570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.987849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.987969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.988758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 
00:25:30.602 [2024-07-12 12:02:19.988904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.988931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.989846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.989979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 
00:25:30.602 [2024-07-12 12:02:19.990207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.602 [2024-07-12 12:02:19.990771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.602 qpair failed and we were unable to recover it. 00:25:30.602 [2024-07-12 12:02:19.990853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.990885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.990967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.990993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 
00:25:30.603 [2024-07-12 12:02:19.991516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.991918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.991945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 
00:25:30.603 [2024-07-12 12:02:19.992733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.992849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.992883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.993886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.993913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 
00:25:30.603 [2024-07-12 12:02:19.994003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.994945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.994973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 
00:25:30.603 [2024-07-12 12:02:19.995348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.995841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.995984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.996011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.603 [2024-07-12 12:02:19.996126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.603 qpair failed and we were unable to recover it. 00:25:30.603 [2024-07-12 12:02:19.996212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.996329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.996442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.996601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 
00:25:30.604 [2024-07-12 12:02:19.996756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.996916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.996956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.997937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.997963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 
00:25:30.604 [2024-07-12 12:02:19.998044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.998858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.998975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 
00:25:30.604 [2024-07-12 12:02:19.999357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:19.999899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:19.999992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 
00:25:30.604 [2024-07-12 12:02:20.000633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.000923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.000958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.001073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.001108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.604 qpair failed and we were unable to recover it. 00:25:30.604 [2024-07-12 12:02:20.001227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.604 [2024-07-12 12:02:20.001262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.001404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.001437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.001547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.001580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.001716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.001746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.001833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.001859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.001958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.001984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 
00:25:30.605 [2024-07-12 12:02:20.002070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.002907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.002947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.605 [2024-07-12 12:02:20.003060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.605 [2024-07-12 12:02:20.003089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.605 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.003185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.003330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 
00:25:30.878 [2024-07-12 12:02:20.003453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.003574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.003722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.003875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.003906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 
00:25:30.878 [2024-07-12 12:02:20.004836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.878 [2024-07-12 12:02:20.004963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.878 [2024-07-12 12:02:20.004989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.878 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.005967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.005993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 
00:25:30.879 [2024-07-12 12:02:20.006107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.006912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.006998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 
00:25:30.879 [2024-07-12 12:02:20.007361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.007906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.007933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 
00:25:30.879 [2024-07-12 12:02:20.008684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.008934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.008961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.879 qpair failed and we were unable to recover it. 00:25:30.879 [2024-07-12 12:02:20.009758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.879 [2024-07-12 12:02:20.009785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 
00:25:30.880 [2024-07-12 12:02:20.009918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.009945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.010843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.010903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.011022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.011054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.011165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.011198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 
00:25:30.880 [2024-07-12 12:02:20.011304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.011335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.012889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.012917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 
00:25:30.880 [2024-07-12 12:02:20.013247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.013922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.013950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 
00:25:30.880 [2024-07-12 12:02:20.014598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.014856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.014888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.015008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.015034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.015153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.015179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.015287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.015314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.015433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.880 [2024-07-12 12:02:20.015459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.880 qpair failed and we were unable to recover it. 00:25:30.880 [2024-07-12 12:02:20.015545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.015571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.015694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.015720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.015812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.015838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 
00:25:30.881 [2024-07-12 12:02:20.015988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.016888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.016987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 
00:25:30.881 [2024-07-12 12:02:20.017252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.017838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.017864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 
00:25:30.881 [2024-07-12 12:02:20.018529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.018876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.018903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 
00:25:30.881 [2024-07-12 12:02:20.019830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.019952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.019978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.020090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.020117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.020194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.020220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.881 qpair failed and we were unable to recover it. 00:25:30.881 [2024-07-12 12:02:20.020318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.881 [2024-07-12 12:02:20.020346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.020444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.020470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.020588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.020615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.020734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.020761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.020856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.020894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.020988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 
00:25:30.882 [2024-07-12 12:02:20.021127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.021921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.021949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 
00:25:30.882 [2024-07-12 12:02:20.022438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.022928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.022956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 
00:25:30.882 [2024-07-12 12:02:20.023738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.023892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.023920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.024879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.024907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 
00:25:30.882 [2024-07-12 12:02:20.024993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.025020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.025148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.025178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.882 qpair failed and we were unable to recover it. 00:25:30.882 [2024-07-12 12:02:20.025275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.882 [2024-07-12 12:02:20.025301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.025401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.025427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.025520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.025548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.025660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.025687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.025770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.025797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.025914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.025942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 
00:25:30.883 [2024-07-12 12:02:20.026310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.026905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.026933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 
00:25:30.883 [2024-07-12 12:02:20.027540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.027891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.027917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.028751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 
00:25:30.883 [2024-07-12 12:02:20.028922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.028949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.029050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.029076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.029161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.029187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.029306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.029332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.029419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.883 [2024-07-12 12:02:20.029444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.883 qpair failed and we were unable to recover it. 00:25:30.883 [2024-07-12 12:02:20.029526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.029552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.029641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.029667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.029793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.029825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.029948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.029976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 
00:25:30.884 [2024-07-12 12:02:20.030208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.030880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.030911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 
00:25:30.884 [2024-07-12 12:02:20.031607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.031902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.031990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 
00:25:30.884 [2024-07-12 12:02:20.032862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.032894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.032993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.033891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.033918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.884 [2024-07-12 12:02:20.034034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.034060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 
00:25:30.884 [2024-07-12 12:02:20.034152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.884 [2024-07-12 12:02:20.034178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.884 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.034951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.034978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 
00:25:30.885 [2024-07-12 12:02:20.035450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.035886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.035913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.036744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 
00:25:30.885 [2024-07-12 12:02:20.036879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.036907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.037970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.037997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.038115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.038142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 
00:25:30.885 [2024-07-12 12:02:20.038233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.885 [2024-07-12 12:02:20.038260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.885 qpair failed and we were unable to recover it. 00:25:30.885 [2024-07-12 12:02:20.038352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.038378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.038520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.038546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.038640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.038666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.038753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.038780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.038906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.038934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 
00:25:30.886 [2024-07-12 12:02:20.039587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.039855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.039888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.040745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 
00:25:30.886 [2024-07-12 12:02:20.040888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.040916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.041944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.041972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 
00:25:30.886 [2024-07-12 12:02:20.042357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.042874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.042916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.043015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.043160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.043309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.043434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.886 [2024-07-12 12:02:20.043550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 
00:25:30.886 [2024-07-12 12:02:20.043667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.886 [2024-07-12 12:02:20.043693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.886 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.043773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.043800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.043891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.043919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.044858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.044899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 
00:25:30.887 [2024-07-12 12:02:20.044991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.045895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.045935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 
00:25:30.887 [2024-07-12 12:02:20.046289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.046958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.046985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 
00:25:30.887 [2024-07-12 12:02:20.047625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.047901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.047929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.048047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.048073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.048169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.048195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.887 [2024-07-12 12:02:20.048310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.887 [2024-07-12 12:02:20.048336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.887 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.048428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.048454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.048544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.048570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.048692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.048721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.048839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.048874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 
00:25:30.888 [2024-07-12 12:02:20.048963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.048990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.049963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.049990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 
00:25:30.888 [2024-07-12 12:02:20.050258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.050884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.050913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 
00:25:30.888 [2024-07-12 12:02:20.051555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.051918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.051945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 
00:25:30.888 [2024-07-12 12:02:20.052845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.052970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.052998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.053115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.053142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.053225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.888 [2024-07-12 12:02:20.053253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.888 qpair failed and we were unable to recover it. 00:25:30.888 [2024-07-12 12:02:20.053339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.053366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.053453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.053479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.053609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.053648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.053801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.053831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.053945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.053973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 
00:25:30.889 [2024-07-12 12:02:20.054208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.054902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.054936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 
00:25:30.889 [2024-07-12 12:02:20.055548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.055908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.055935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 
00:25:30.889 [2024-07-12 12:02:20.056748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.056893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.056921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.057824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.057852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 
00:25:30.889 [2024-07-12 12:02:20.057984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.058028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.058158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.058188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.058284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.058315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.058438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.889 [2024-07-12 12:02:20.058465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.889 qpair failed and we were unable to recover it. 00:25:30.889 [2024-07-12 12:02:20.058556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.058584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.058693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.058721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.058808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.058835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.058936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.058964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.059114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.059257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 
00:25:30.890 [2024-07-12 12:02:20.059400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.059556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.059716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.059849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.059885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 
00:25:30.890 [2024-07-12 12:02:20.060783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.060932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.060965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.061885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.061975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 
00:25:30.890 [2024-07-12 12:02:20.062094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.062860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.062903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 
00:25:30.890 [2024-07-12 12:02:20.063462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.063886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.063987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.890 [2024-07-12 12:02:20.064013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.890 qpair failed and we were unable to recover it. 00:25:30.890 [2024-07-12 12:02:20.064103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 
00:25:30.891 [2024-07-12 12:02:20.064712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.064876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.064996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.065913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.065941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 
00:25:30.891 [2024-07-12 12:02:20.066065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.066939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.066965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 
00:25:30.891 [2024-07-12 12:02:20.067303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.067941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.067974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 
00:25:30.891 [2024-07-12 12:02:20.068598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.068858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.068889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.069003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.069029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.069146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.069172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.891 qpair failed and we were unable to recover it. 00:25:30.891 [2024-07-12 12:02:20.069261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-12 12:02:20.069287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.069365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.069393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.069478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.069505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.069590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.069617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.069716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.069743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 
00:25:30.892 [2024-07-12 12:02:20.069889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.069917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.070910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.070996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 
00:25:30.892 [2024-07-12 12:02:20.071166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.071282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.071436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.071576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.071717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.071855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.071886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 
00:25:30.892 [2024-07-12 12:02:20.072524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.072934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.072962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 00:25:30.892 [2024-07-12 12:02:20.073716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.892 [2024-07-12 12:02:20.073743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.892 qpair failed and we were unable to recover it. 
00:25:30.892 [2024-07-12 12:02:20.073828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.073854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.073973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.073999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.074877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 00:25:30.893 [2024-07-12 12:02:20.074968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.893 [2024-07-12 12:02:20.075000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.893 qpair failed and we were unable to recover it. 
00:25:30.893 [2024-07-12 12:02:20.075094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.893 [2024-07-12 12:02:20.075121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:30.893 qpair failed and we were unable to recover it.
00:25:30.893 [... the same three-line sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 12:02:20.075 through 12:02:20.103 (elapsed prefix 00:25:30.893-00:25:30.898) for tqpair handles 0x7fba7c000b90, 0x7fba6c000b90, 0x7fba74000b90 and 0x1349f90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:30.898 [2024-07-12 12:02:20.103687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.103715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.103916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.103943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.104910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.104938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 
00:25:30.898 [2024-07-12 12:02:20.105199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.105925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.105952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.106038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-12 12:02:20.106065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.898 qpair failed and we were unable to recover it. 00:25:30.898 [2024-07-12 12:02:20.106151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.106293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.106444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 
00:25:30.899 [2024-07-12 12:02:20.106591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.106753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.106928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.106958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.107821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 
00:25:30.899 [2024-07-12 12:02:20.107943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.107969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.108907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.108934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 
00:25:30.899 [2024-07-12 12:02:20.109343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.109855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.109985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 
00:25:30.899 [2024-07-12 12:02:20.110585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.110903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.110929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.111024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.111051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.111139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.111165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.111274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.111300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.899 [2024-07-12 12:02:20.111379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.899 [2024-07-12 12:02:20.111405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.899 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.111508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.111549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.111643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.111671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.111787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.111814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 
00:25:30.900 [2024-07-12 12:02:20.111896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.111924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.112951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.112977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 
00:25:30.900 [2024-07-12 12:02:20.113209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.113901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.113930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 
00:25:30.900 [2024-07-12 12:02:20.114616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.114942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.114971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.115899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.115925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 
00:25:30.900 [2024-07-12 12:02:20.116005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.116030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.116116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.116142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.116249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.116274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.116359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.116398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.900 qpair failed and we were unable to recover it. 00:25:30.900 [2024-07-12 12:02:20.116531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.900 [2024-07-12 12:02:20.116558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.116643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.116670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.116752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.116778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.116897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.116926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 
00:25:30.901 [2024-07-12 12:02:20.117317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.117897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.117991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 
00:25:30.901 [2024-07-12 12:02:20.118680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.118960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.118999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.119896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.119924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 
00:25:30.901 [2024-07-12 12:02:20.120010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.120874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.120902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.121047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.121074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.121159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.121186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 00:25:30.901 [2024-07-12 12:02:20.121270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.901 [2024-07-12 12:02:20.121296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.901 qpair failed and we were unable to recover it. 
00:25:30.901 [2024-07-12 12:02:20.121405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.901 [2024-07-12 12:02:20.121432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420
00:25:30.901 qpair failed and we were unable to recover it.
00:25:30.901 [2024-07-12 12:02:20.121669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.901 [2024-07-12 12:02:20.121699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:30.902 qpair failed and we were unable to recover it.
00:25:30.902 [2024-07-12 12:02:20.121790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.902 [2024-07-12 12:02:20.121819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:30.902 qpair failed and we were unable to recover it.
00:25:30.902 [2024-07-12 12:02:20.122657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.902 [2024-07-12 12:02:20.122686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:30.902 qpair failed and we were unable to recover it.
00:25:30.902 [repeated: the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error to addr=10.0.0.2, port=4420 recurs for tqpairs 0x7fba74000b90, 0x7fba7c000b90, 0x7fba6c000b90 and 0x1349f90 through 12:02:20.149904; every attempt ends with "qpair failed and we were unable to recover it."]
00:25:30.907 [2024-07-12 12:02:20.150022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.150890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.150917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 
00:25:30.907 [2024-07-12 12:02:20.151263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.151906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.151933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 
00:25:30.907 [2024-07-12 12:02:20.152501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.152837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.152982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.153125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.153235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.153375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.153520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.907 qpair failed and we were unable to recover it. 00:25:30.907 [2024-07-12 12:02:20.153663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.907 [2024-07-12 12:02:20.153690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.153810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.153837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 
00:25:30.908 [2024-07-12 12:02:20.153927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.153954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.154879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.154905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 
00:25:30.908 [2024-07-12 12:02:20.155320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.155938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.155967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 
00:25:30.908 [2024-07-12 12:02:20.156737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.156887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.156982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.157933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.157960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 
00:25:30.908 [2024-07-12 12:02:20.158043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.158849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.158881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.159000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.159027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.908 [2024-07-12 12:02:20.159135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.908 [2024-07-12 12:02:20.159162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.908 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.159303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.159329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 
00:25:30.909 [2024-07-12 12:02:20.159470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.159495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.159587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.159615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.159746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.159786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.159886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.159916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 
00:25:30.909 [2024-07-12 12:02:20.160842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.160969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.160997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.161888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.161915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 
00:25:30.909 [2024-07-12 12:02:20.162337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.162914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.162942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 
00:25:30.909 [2024-07-12 12:02:20.163824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.163852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.163976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.164004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.164087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.164114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.909 qpair failed and we were unable to recover it. 00:25:30.909 [2024-07-12 12:02:20.164214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.909 [2024-07-12 12:02:20.164241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.164368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.164395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.164540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.164566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.164684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.164715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.164833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.164861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.164958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.164984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 
00:25:30.910 [2024-07-12 12:02:20.165198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.165885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.165913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 
00:25:30.910 [2024-07-12 12:02:20.166548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.166824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.166989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.167904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.167931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 
00:25:30.910 [2024-07-12 12:02:20.168068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.168876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.168994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.169134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.169252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 
00:25:30.910 [2024-07-12 12:02:20.169394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.169510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.169674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.910 [2024-07-12 12:02:20.169714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.910 qpair failed and we were unable to recover it. 00:25:30.910 [2024-07-12 12:02:20.169841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.169881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 
00:25:30.911 [2024-07-12 12:02:20.170797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.170920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.170947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.171896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.171978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 
00:25:30.911 [2024-07-12 12:02:20.172124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.172878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.172980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 
00:25:30.911 [2024-07-12 12:02:20.173501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.173907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.173941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.174718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 
00:25:30.911 [2024-07-12 12:02:20.174849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.174896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.175015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.175043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.175164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.175191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.175286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.911 [2024-07-12 12:02:20.175313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.911 qpair failed and we were unable to recover it. 00:25:30.911 [2024-07-12 12:02:20.175441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.175468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.175586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.175614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.175696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.175722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.175835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.175863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.175953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.175980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 
00:25:30.912 [2024-07-12 12:02:20.176212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.176897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.176926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 
00:25:30.912 [2024-07-12 12:02:20.177551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.177955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.177982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 
00:25:30.912 [2024-07-12 12:02:20.178799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.178825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.178989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.179947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.179987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.180077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.180107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 
00:25:30.912 [2024-07-12 12:02:20.180225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.180253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.180348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.180375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.180493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.180519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.180633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.912 [2024-07-12 12:02:20.180660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.912 qpair failed and we were unable to recover it. 00:25:30.912 [2024-07-12 12:02:20.180772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.180799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.180888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.180915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 
00:25:30.913 [2024-07-12 12:02:20.181517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.181876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.181977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.182746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 
00:25:30.913 [2024-07-12 12:02:20.182886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.182918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.183936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.183977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.184097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.184243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 
00:25:30.913 [2024-07-12 12:02:20.184414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.184565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.184713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.184849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.184885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.185689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 
00:25:30.913 [2024-07-12 12:02:20.185893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.185921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.186042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.186069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.913 qpair failed and we were unable to recover it. 00:25:30.913 [2024-07-12 12:02:20.186153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.913 [2024-07-12 12:02:20.186180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.186306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.186334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.186494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.186534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.186650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.186678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.186793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.186820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.186935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.186963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 
00:25:30.914 [2024-07-12 12:02:20.187370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.187942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.187971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 
00:25:30.914 [2024-07-12 12:02:20.188829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.188879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.188973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.189873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.189993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 
00:25:30.914 [2024-07-12 12:02:20.190222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.190860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.190894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.191043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.191070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.191185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.191212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.914 [2024-07-12 12:02:20.191330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.914 [2024-07-12 12:02:20.191363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.914 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.191456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.191484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 
00:25:30.915 [2024-07-12 12:02:20.191602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.191629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.191759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.191789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.191939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.191967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.192897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.192938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 
00:25:30.915 [2024-07-12 12:02:20.193109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.193260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.193437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.193634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.193782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.193900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.193926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 
00:25:30.915 [2024-07-12 12:02:20.194569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.194967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.194994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.195710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 
00:25:30.915 [2024-07-12 12:02:20.195836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.195862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.915 [2024-07-12 12:02:20.196891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.915 [2024-07-12 12:02:20.196920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.915 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 
00:25:30.916 [2024-07-12 12:02:20.197144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.197901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.197996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 
00:25:30.916 [2024-07-12 12:02:20.198351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.198903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.198995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 
00:25:30.916 [2024-07-12 12:02:20.199604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.199921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.199951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.200812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 
00:25:30.916 [2024-07-12 12:02:20.200956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.200983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.201972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.916 [2024-07-12 12:02:20.201999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.916 qpair failed and we were unable to recover it. 00:25:30.916 [2024-07-12 12:02:20.202123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 
00:25:30.917 [2024-07-12 12:02:20.202237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.202955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.202981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 
00:25:30.917 [2024-07-12 12:02:20.203416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.203942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.203969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 
00:25:30.917 [2024-07-12 12:02:20.204682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.204943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.204969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.205819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 
00:25:30.917 [2024-07-12 12:02:20.205943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.205969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.206882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.206909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.207001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.207028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.917 qpair failed and we were unable to recover it. 00:25:30.917 [2024-07-12 12:02:20.207115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.917 [2024-07-12 12:02:20.207142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 
00:25:30.918 [2024-07-12 12:02:20.207255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.207361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.207475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.207615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.207782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.207906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.207932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 
00:25:30.918 [2024-07-12 12:02:20.208519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.208914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.208942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.209767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 
00:25:30.918 [2024-07-12 12:02:20.209902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.209929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.210850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.210891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.211010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.211036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 
00:25:30.918 [2024-07-12 12:02:20.211161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.211188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.211271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.211296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.211386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.211413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.211497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.918 [2024-07-12 12:02:20.211524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.918 qpair failed and we were unable to recover it. 00:25:30.918 [2024-07-12 12:02:20.211641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.211667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.211756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.211783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.211892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.211933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 
00:25:30.919 [2024-07-12 12:02:20.212364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.212894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.212977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 
00:25:30.919 [2024-07-12 12:02:20.213580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.213955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.213981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 
00:25:30.919 [2024-07-12 12:02:20.214784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.214910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.214936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.215961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.215989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 
00:25:30.919 [2024-07-12 12:02:20.216107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.216134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.216248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.216274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.216362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.216388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.216490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.919 [2024-07-12 12:02:20.216517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.919 qpair failed and we were unable to recover it. 00:25:30.919 [2024-07-12 12:02:20.216602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.216627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.216715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.216747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.216838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.216872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.216954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.216981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 
00:25:30.920 [2024-07-12 12:02:20.217314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.217906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.217933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 
00:25:30.920 [2024-07-12 12:02:20.218544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.218839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.218881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.219738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 
00:25:30.920 [2024-07-12 12:02:20.219891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.219917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.220917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.220997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 
00:25:30.920 [2024-07-12 12:02:20.221153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.221348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.221469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.221604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.920 qpair failed and we were unable to recover it. 00:25:30.920 [2024-07-12 12:02:20.221719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.920 [2024-07-12 12:02:20.221745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.221857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.221890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.221983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 
00:25:30.921 [2024-07-12 12:02:20.222488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.222887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.222914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 
00:25:30.921 [2024-07-12 12:02:20.223733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.223892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.223989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.224922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.224949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 
00:25:30.921 [2024-07-12 12:02:20.225061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.225855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.225974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 
00:25:30.921 [2024-07-12 12:02:20.226329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.226907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.226935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.227017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.227046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.921 [2024-07-12 12:02:20.227138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.921 [2024-07-12 12:02:20.227165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.921 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.227280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.227398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 
00:25:30.922 [2024-07-12 12:02:20.227519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.227669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.227791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.227906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.227934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 
00:25:30.922 [2024-07-12 12:02:20.228803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.228844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.228973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.229854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.229886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 
00:25:30.922 [2024-07-12 12:02:20.230127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.230935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.230962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 
00:25:30.922 [2024-07-12 12:02:20.231443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.231947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.231976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.922 qpair failed and we were unable to recover it. 00:25:30.922 [2024-07-12 12:02:20.232108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.922 [2024-07-12 12:02:20.232147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.232272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.232393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.232506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.232648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 
00:25:30.923 [2024-07-12 12:02:20.232769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.232912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.232941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.233951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.233977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 
00:25:30.923 [2024-07-12 12:02:20.234098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.234970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.234996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 
00:25:30.923 [2024-07-12 12:02:20.235347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.235850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.923 [2024-07-12 12:02:20.235972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.923 [2024-07-12 12:02:20.236000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.923 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 
00:25:30.924 [2024-07-12 12:02:20.236672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.236961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.236989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.237824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 
00:25:30.924 [2024-07-12 12:02:20.237955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.237983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.238884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.238997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 
00:25:30.924 [2024-07-12 12:02:20.239245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.239923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.239951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 
00:25:30.924 [2024-07-12 12:02:20.240566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.240855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.240976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.241003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.241116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.241142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.241252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.924 [2024-07-12 12:02:20.241278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.924 qpair failed and we were unable to recover it. 00:25:30.924 [2024-07-12 12:02:20.241360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.241386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.241475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.241506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.241590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.241617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.241716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.241745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 
00:25:30.925 [2024-07-12 12:02:20.241857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.241890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.242878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.242906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 
00:25:30.925 [2024-07-12 12:02:20.243189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.243859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.243910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 
00:25:30.925 [2024-07-12 12:02:20.244521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.244902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.244929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 
00:25:30.925 [2024-07-12 12:02:20.245671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.245905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.245995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.246023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.246113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.246139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.246246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.246273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.246369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.246404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.925 qpair failed and we were unable to recover it. 00:25:30.925 [2024-07-12 12:02:20.246508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.925 [2024-07-12 12:02:20.246537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.246657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.246684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.246797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.246824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.246934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.246962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 
00:25:30.926 [2024-07-12 12:02:20.247055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.247931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.247965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 
00:25:30.926 [2024-07-12 12:02:20.248291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.248909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.248936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 
00:25:30.926 [2024-07-12 12:02:20.249611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.249966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.249993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.250713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 
00:25:30.926 [2024-07-12 12:02:20.250889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.250918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.926 qpair failed and we were unable to recover it. 00:25:30.926 [2024-07-12 12:02:20.251623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.926 [2024-07-12 12:02:20.251649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.251742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.251770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.251889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.251917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 
00:25:30.927 [2024-07-12 12:02:20.252160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.252908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.252935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 
00:25:30.927 [2024-07-12 12:02:20.253445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.253954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.253981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 
00:25:30.927 [2024-07-12 12:02:20.254737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.254894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.254976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 00:25:30.927 [2024-07-12 12:02:20.255835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.927 [2024-07-12 12:02:20.255861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.927 qpair failed and we were unable to recover it. 
00:25:30.927 [2024-07-12 12:02:20.255960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.255986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.256932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.256959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 
00:25:30.928 [2024-07-12 12:02:20.257293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.257930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.257961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 
00:25:30.928 [2024-07-12 12:02:20.258590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.258955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.258982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 
00:25:30.928 [2024-07-12 12:02:20.259862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.259894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.259995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.260886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.260914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.261007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.261033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 00:25:30.928 [2024-07-12 12:02:20.261144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.261170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.928 qpair failed and we were unable to recover it. 
00:25:30.928 [2024-07-12 12:02:20.261267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.928 [2024-07-12 12:02:20.261293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.261374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.261400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.261489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.261515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.261594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.261620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.261701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.261727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.261854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.261889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 
00:25:30.929 [2024-07-12 12:02:20.262565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.262948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.262975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.263716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 
00:25:30.929 [2024-07-12 12:02:20.263845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.263881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.264911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.264951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 
00:25:30.929 [2024-07-12 12:02:20.265317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.265898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.265982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.266125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.266270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.266384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.929 [2024-07-12 12:02:20.266496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 
00:25:30.929 [2024-07-12 12:02:20.266614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.929 [2024-07-12 12:02:20.266640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.929 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.266726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.266753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.266844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.266878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.266967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.266994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.267800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 
00:25:30.930 [2024-07-12 12:02:20.267919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.267946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.268915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.268996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 
00:25:30.930 [2024-07-12 12:02:20.269270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.269896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.269920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 
00:25:30.930 [2024-07-12 12:02:20.270532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.270917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.270942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 
00:25:30.930 [2024-07-12 12:02:20.271844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.271964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.930 [2024-07-12 12:02:20.271989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.930 qpair failed and we were unable to recover it. 00:25:30.930 [2024-07-12 12:02:20.272105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.272944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.272968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 
00:25:30.931 [2024-07-12 12:02:20.273081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.273915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.273999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 
00:25:30.931 [2024-07-12 12:02:20.274380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.274946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.274972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 
00:25:30.931 [2024-07-12 12:02:20.275682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.275953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.275986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.276138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.276165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.276256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.276283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.276388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.276415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.276509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.276535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.931 qpair failed and we were unable to recover it. 00:25:30.931 [2024-07-12 12:02:20.276623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.931 [2024-07-12 12:02:20.276649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.276758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.276784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.276879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.276906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 
00:25:30.932 [2024-07-12 12:02:20.277019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.277881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.277914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 
00:25:30.932 [2024-07-12 12:02:20.278320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.278970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.278996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.279084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.279111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.279206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.279232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.279352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.279378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 00:25:30.932 [2024-07-12 12:02:20.279472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.932 [2024-07-12 12:02:20.279502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.932 qpair failed and we were unable to recover it. 
00:25:30.932 [2024-07-12 12:02:20.279614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.932 [2024-07-12 12:02:20.279640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:30.932 qpair failed and we were unable to recover it.
[The same three-line sequence -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 12:02:20.279731 through 12:02:20.309068, cycling over tqpair values 0x1349f90, 0x7fba74000b90, 0x7fba6c000b90, and 0x7fba7c000b90, always with addr=10.0.0.2, port=4420.]
00:25:30.937 [2024-07-12 12:02:20.309184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.937 [2024-07-12 12:02:20.309209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.937 qpair failed and we were unable to recover it. 00:25:30.937 [2024-07-12 12:02:20.309325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.937 [2024-07-12 12:02:20.309352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.937 qpair failed and we were unable to recover it. 00:25:30.937 [2024-07-12 12:02:20.309463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.937 [2024-07-12 12:02:20.309489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.937 qpair failed and we were unable to recover it. 00:25:30.937 [2024-07-12 12:02:20.309574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.937 [2024-07-12 12:02:20.309601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.937 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.309704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.309744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.309840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.309874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.309974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 
00:25:30.938 [2024-07-12 12:02:20.310544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.310972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.310998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 
00:25:30.938 [2024-07-12 12:02:20.311882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.311913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.311999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.312901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.312930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 
00:25:30.938 [2024-07-12 12:02:20.313199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.313859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.313907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 
00:25:30.938 [2024-07-12 12:02:20.314502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.314912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.938 [2024-07-12 12:02:20.314940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.938 qpair failed and we were unable to recover it. 00:25:30.938 [2024-07-12 12:02:20.315097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.315256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.315378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.315519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.315667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.315810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 
00:25:30.939 [2024-07-12 12:02:20.315949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.315977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.316920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.316948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 
00:25:30.939 [2024-07-12 12:02:20.317362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.317899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.317929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 
00:25:30.939 [2024-07-12 12:02:20.318676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.318849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.318884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.319879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.319908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.320027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 
00:25:30.939 [2024-07-12 12:02:20.320147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.320287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.320432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.320569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.939 qpair failed and we were unable to recover it. 00:25:30.939 [2024-07-12 12:02:20.320688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.939 [2024-07-12 12:02:20.320720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.320814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.320843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.320978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 
00:25:30.940 [2024-07-12 12:02:20.321539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.321958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.321998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.322812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.322840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 
00:25:30.940 [2024-07-12 12:02:20.322990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.323900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.323930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 
00:25:30.940 [2024-07-12 12:02:20.324464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.324877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.324904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.325067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.325236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.325376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.325519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.940 [2024-07-12 12:02:20.325631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.940 qpair failed and we were unable to recover it. 00:25:30.940 [2024-07-12 12:02:20.325747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.325774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 
00:25:30.941 [2024-07-12 12:02:20.325896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.325923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.326761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.326862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 
00:25:30.941 [2024-07-12 12:02:20.327417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.327902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.327989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 
00:25:30.941 [2024-07-12 12:02:20.328729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.328959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.328985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.329966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.329993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.330113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 
00:25:30.941 [2024-07-12 12:02:20.330252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.330390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.330535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.330688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.330862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.330907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.331028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.331056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.331150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.331177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.331275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.331302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.331439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.331466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.941 qpair failed and we were unable to recover it. 00:25:30.941 [2024-07-12 12:02:20.331577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.941 [2024-07-12 12:02:20.331604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 
00:25:30.942 [2024-07-12 12:02:20.331694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.331721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.331813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.331843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.331934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.331961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.332911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.332939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 
00:25:30.942 [2024-07-12 12:02:20.333068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.333937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.333965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 
00:25:30.942 [2024-07-12 12:02:20.334469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.334925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.334954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.335738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 
00:25:30.942 [2024-07-12 12:02:20.335913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.335953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.336875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.336915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.337044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.337073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.337155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.337181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 00:25:30.942 [2024-07-12 12:02:20.337297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.337323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.942 qpair failed and we were unable to recover it. 
00:25:30.942 [2024-07-12 12:02:20.337409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-07-12 12:02:20.337435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.337555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.337582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.337745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.337785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.337887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.337915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 
00:25:30.943 [2024-07-12 12:02:20.338859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.338892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.338981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.339845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.339964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 
00:25:30.943 [2024-07-12 12:02:20.340416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.340890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.340984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 
00:25:30.943 [2024-07-12 12:02:20.341811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.341931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.341959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.342962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.342989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.343086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.343115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 
00:25:30.943 [2024-07-12 12:02:20.343230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.343257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.943 [2024-07-12 12:02:20.343374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-07-12 12:02:20.343401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.943 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.343517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.343543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.343661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.343688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.343883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.343911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 
00:25:30.944 [2024-07-12 12:02:20.344771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.344891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.344920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.345889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.345921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 
00:25:30.944 [2024-07-12 12:02:20.346158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.346941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.346967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 
00:25:30.944 [2024-07-12 12:02:20.347481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.347913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.347945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.348089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.348116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.348230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.348257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.944 qpair failed and we were unable to recover it. 00:25:30.944 [2024-07-12 12:02:20.348343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-07-12 12:02:20.348370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.348484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.348511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.348602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.348630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.348744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.348771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 
00:25:30.945 [2024-07-12 12:02:20.348895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.348924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.349908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.349936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 
00:25:30.945 [2024-07-12 12:02:20.350305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.350905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.350934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 
00:25:30.945 [2024-07-12 12:02:20.351823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.351961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.351988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.352930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.352958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 
00:25:30.945 [2024-07-12 12:02:20.353187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.353924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.353965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.354063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.945 [2024-07-12 12:02:20.354091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.945 qpair failed and we were unable to recover it. 00:25:30.945 [2024-07-12 12:02:20.354169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.354308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.354451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 
00:25:30.946 [2024-07-12 12:02:20.354594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.354720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.354871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.354899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.354985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.355778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 
00:25:30.946 [2024-07-12 12:02:20.355930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.355960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:30.946 [2024-07-12 12:02:20.356665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.946 [2024-07-12 12:02:20.356692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:30.946 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.356815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.356855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.356989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.357279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.357876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.357993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.358684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.358819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.358984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.359912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.359952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.360087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.360952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.360980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.361469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.361931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.361959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.362779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.362925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.362952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.363969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.363998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.364096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.364123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 
00:25:31.244 [2024-07-12 12:02:20.364266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.364293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.364374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.364401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.364529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.364559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.364687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.244 [2024-07-12 12:02:20.364716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.244 qpair failed and we were unable to recover it. 00:25:31.244 [2024-07-12 12:02:20.364885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.364926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.365709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.365950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.365977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.366918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.366946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.367201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.367859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.367997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.368168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.368311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.368481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.368671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.368818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.368845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.368978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.369857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.369892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.370269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.370875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.370996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.371644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.371967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.371995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.372823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.372863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.373027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.373056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 
00:25:31.245 [2024-07-12 12:02:20.373144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.373171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.373284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.373311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.373426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.373453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.373598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.245 [2024-07-12 12:02:20.373625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.245 qpair failed and we were unable to recover it. 00:25:31.245 [2024-07-12 12:02:20.373739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.373765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.373877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.373905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 
00:25:31.246 [2024-07-12 12:02:20.374515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.374860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.374975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 00:25:31.246 [2024-07-12 12:02:20.375782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.246 [2024-07-12 12:02:20.375809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.246 qpair failed and we were unable to recover it. 
00:25:31.246 [2024-07-12 12:02:20.375936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.246 [2024-07-12 12:02:20.375967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:31.246 qpair failed and we were unable to recover it.
00:25:31.246 [2024-07-12 12:02:20.379301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.246 [2024-07-12 12:02:20.379342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:31.246 qpair failed and we were unable to recover it.
00:25:31.247 [2024-07-12 12:02:20.383734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.247 [2024-07-12 12:02:20.383774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:31.247 qpair failed and we were unable to recover it.
00:25:31.247 [2024-07-12 12:02:20.384948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.247 [2024-07-12 12:02:20.384989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420
00:25:31.247 qpair failed and we were unable to recover it.
00:25:31.250 [2024-07-12 12:02:20.404513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.250 [2024-07-12 12:02:20.404537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:31.250 qpair failed and we were unable to recover it.
00:25:31.250 [2024-07-12 12:02:20.404651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.404675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.404785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.404810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.404902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.404930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.405781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 
00:25:31.250 [2024-07-12 12:02:20.405930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.405955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.406046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.406070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.406160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.406185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.406296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.406320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.406405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.406429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.250 [2024-07-12 12:02:20.406514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-07-12 12:02:20.406539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.250 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.406627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.406654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.406749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.406774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.406887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.406914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.407103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.407905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.407990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.408430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.408849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.408978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.409727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.409963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.409989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.410876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.410903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.410983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.411916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.411954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.412200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.412964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.412990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 
00:25:31.251 [2024-07-12 12:02:20.413487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.413908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.413935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.414053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.414079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.251 qpair failed and we were unable to recover it. 00:25:31.251 [2024-07-12 12:02:20.414199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.251 [2024-07-12 12:02:20.414226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.414308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.414334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.414416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.414443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.414526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.414552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.414664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.414691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.414789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.414829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.414973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.415877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.415999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.416118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.416895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.416982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.417359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.417906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.417933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.418601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.418889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.418986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.419833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.419861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.419971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.420916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.420942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.421062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.421092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.421188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.421214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 
00:25:31.252 [2024-07-12 12:02:20.421335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.421361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.421444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.252 [2024-07-12 12:02:20.421469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.252 qpair failed and we were unable to recover it. 00:25:31.252 [2024-07-12 12:02:20.421587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.421614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.421735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.421763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.421849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.421883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 
00:25:31.253 [2024-07-12 12:02:20.422660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.422905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.422932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.423817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 
00:25:31.253 [2024-07-12 12:02:20.423947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.423975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.424961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.424988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 
00:25:31.253 [2024-07-12 12:02:20.425254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.425926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.425952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.426067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.253 [2024-07-12 12:02:20.426093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.253 qpair failed and we were unable to recover it. 00:25:31.253 [2024-07-12 12:02:20.426175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.426293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.426402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 
00:25:31.254 [2024-07-12 12:02:20.426527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.426638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.426746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.426890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.426917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 
00:25:31.254 [2024-07-12 12:02:20.427756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.427877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.427904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.428884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.428980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 
00:25:31.254 [2024-07-12 12:02:20.429107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.429902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.429989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.430111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.430220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 
00:25:31.254 [2024-07-12 12:02:20.430336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.430447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.430561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.254 qpair failed and we were unable to recover it. 00:25:31.254 [2024-07-12 12:02:20.430709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.254 [2024-07-12 12:02:20.430735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.430847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.430878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.430992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 
00:25:31.255 [2024-07-12 12:02:20.431670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.431955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.431982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.432797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 
00:25:31.255 [2024-07-12 12:02:20.432925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.432952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.433884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.433989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 
00:25:31.255 [2024-07-12 12:02:20.434311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.434881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.434996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.435162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.435277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.435390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.435555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 
00:25:31.255 [2024-07-12 12:02:20.435677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.255 [2024-07-12 12:02:20.435821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.255 [2024-07-12 12:02:20.435847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.255 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.436795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.436821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 
00:25:31.256 [2024-07-12 12:02:20.437142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.437929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.437956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 
00:25:31.256 [2024-07-12 12:02:20.438511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.438950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.438983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 
00:25:31.256 [2024-07-12 12:02:20.439861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.439896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.439986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.440897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.440925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.256 [2024-07-12 12:02:20.441016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.441042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 
00:25:31.256 [2024-07-12 12:02:20.441154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.256 [2024-07-12 12:02:20.441180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.256 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.441920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.441946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 
00:25:31.257 [2024-07-12 12:02:20.442371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.442903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.442931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 
00:25:31.257 [2024-07-12 12:02:20.443679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.443927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.443954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.444819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.444845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 
00:25:31.257 [2024-07-12 12:02:20.444970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.445899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.445946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.446054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.257 [2024-07-12 12:02:20.446082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.257 qpair failed and we were unable to recover it. 00:25:31.257 [2024-07-12 12:02:20.446201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.258 [2024-07-12 12:02:20.446229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.258 qpair failed and we were unable to recover it. 
00:25:31.263 [2024-07-12 12:02:20.470625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.470652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.470738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.470764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.470849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.470880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.470967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.470994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 
00:25:31.263 [2024-07-12 12:02:20.471824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.471938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.471963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.472883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.472910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 
00:25:31.263 [2024-07-12 12:02:20.472999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.473025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.263 qpair failed and we were unable to recover it. 00:25:31.263 [2024-07-12 12:02:20.473141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.263 [2024-07-12 12:02:20.473167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.473901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.473983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 
00:25:31.264 [2024-07-12 12:02:20.474238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.474883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.474999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 
00:25:31.264 [2024-07-12 12:02:20.475501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.475962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.475989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 
00:25:31.264 [2024-07-12 12:02:20.476877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.476904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.476996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.477906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.477933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 00:25:31.264 [2024-07-12 12:02:20.478047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.478074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.264 qpair failed and we were unable to recover it. 
00:25:31.264 [2024-07-12 12:02:20.478160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.264 [2024-07-12 12:02:20.478186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.478958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.478984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 
00:25:31.265 [2024-07-12 12:02:20.479468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.479901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.479992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 
00:25:31.265 [2024-07-12 12:02:20.480660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.480919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.480997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.481729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 
00:25:31.265 [2024-07-12 12:02:20.481911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.481945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.482031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.482059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.482150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.482178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.482296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.482324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-07-12 12:02:20.482465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.265 qpair failed and we were unable to recover it. 00:25:31.265 [2024-07-12 12:02:20.482571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.482599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.482690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.482717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.482813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.482853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.482959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.482986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 
00:25:31.266 [2024-07-12 12:02:20.483214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.483900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.483985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 
00:25:31.266 [2024-07-12 12:02:20.484468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.484905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.484932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 
00:25:31.266 [2024-07-12 12:02:20.485772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.485952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.485979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.486863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.486985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.487011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 
00:25:31.266 [2024-07-12 12:02:20.487124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-07-12 12:02:20.487150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.266 qpair failed and we were unable to recover it. 00:25:31.266 [2024-07-12 12:02:20.487239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.487900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.487982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 
00:25:31.267 [2024-07-12 12:02:20.488370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.488927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.488952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 
00:25:31.267 [2024-07-12 12:02:20.489570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.489931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.489957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 
00:25:31.267 [2024-07-12 12:02:20.490861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.490972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.490998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.491088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.491114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.491198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.491223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.491343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.491369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.491480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.491506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.267 qpair failed and we were unable to recover it. 00:25:31.267 [2024-07-12 12:02:20.491615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.267 [2024-07-12 12:02:20.491640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.491720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.491746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.491829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.491855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.491958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.491998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 
00:25:31.268 [2024-07-12 12:02:20.492121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.492895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.492984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 
00:25:31.268 [2024-07-12 12:02:20.493312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.493955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.493984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.494070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.494097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.494223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.494250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.494365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.268 [2024-07-12 12:02:20.494391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.268 qpair failed and we were unable to recover it. 00:25:31.268 [2024-07-12 12:02:20.494479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.494505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 
00:25:31.269 [2024-07-12 12:02:20.494616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.494642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.494760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.494786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.494877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.494904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.494992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 
00:25:31.269 [2024-07-12 12:02:20.495827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.495854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.495975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.496863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.496987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 
00:25:31.269 [2024-07-12 12:02:20.497097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.497952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.497979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 
00:25:31.269 [2024-07-12 12:02:20.498322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.498875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.498995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 
00:25:31.269 [2024-07-12 12:02:20.499632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.499931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.499958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.500067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.500093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.500208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.500235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.500351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.269 [2024-07-12 12:02:20.500378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.269 qpair failed and we were unable to recover it. 00:25:31.269 [2024-07-12 12:02:20.500465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.500491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.500579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.500605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.500718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.500744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.500856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.500889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 
00:25:31.270 [2024-07-12 12:02:20.500981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.501949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.501990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 
00:25:31.270 [2024-07-12 12:02:20.502353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.502938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.502965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 
00:25:31.270 [2024-07-12 12:02:20.503687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.503950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.503979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.504957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.504984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 
00:25:31.270 [2024-07-12 12:02:20.505100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.505893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.505975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 
00:25:31.270 [2024-07-12 12:02:20.506385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.506904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.506984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.507010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.507097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.507122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.270 qpair failed and we were unable to recover it. 00:25:31.270 [2024-07-12 12:02:20.507230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.270 [2024-07-12 12:02:20.507256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.507346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.507456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.507563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.507709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.507847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.507972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.507999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.508778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.508944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.508970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.509852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.509981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.510220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.510876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.510903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.511477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.511907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.511936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.512775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.512900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.512927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.513958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.513986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.514238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.514881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.514908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 
00:25:31.271 [2024-07-12 12:02:20.515522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.271 qpair failed and we were unable to recover it. 00:25:31.271 [2024-07-12 12:02:20.515943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.271 [2024-07-12 12:02:20.515971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.516870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.516900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.516987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.517883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.517910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.518186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.518906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.518932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.519584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.519905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.519997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.520824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.520952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.520978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.521852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.521888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.522381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.522893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.522921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.523718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.523861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.523893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.524880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.524920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.525044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.525072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 
00:25:31.272 [2024-07-12 12:02:20.525192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.525218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.525332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.272 [2024-07-12 12:02:20.525358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.272 qpair failed and we were unable to recover it. 00:25:31.272 [2024-07-12 12:02:20.525465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.525491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.525638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.525665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.525755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.525781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.525894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.525921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 
00:25:31.273 [2024-07-12 12:02:20.526579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.526825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.526851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.527849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.527883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 
00:25:31.273 [2024-07-12 12:02:20.528023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.528900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.528982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 
00:25:31.273 [2024-07-12 12:02:20.529360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.529893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.529920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 
00:25:31.273 [2024-07-12 12:02:20.530764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.530929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.530959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 00:25:31.273 [2024-07-12 12:02:20.531951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.273 [2024-07-12 12:02:20.531979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.273 qpair failed and we were unable to recover it. 
00:25:31.273 [2024-07-12 12:02:20.532059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.273 [2024-07-12 12:02:20.532086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:31.273 qpair failed and we were unable to recover it.
00:25:31.273 [2024-07-12 12:02:20.532687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.273 [2024-07-12 12:02:20.532714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:31.273 qpair failed and we were unable to recover it.
00:25:31.273 [2024-07-12 12:02:20.535340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.273 [2024-07-12 12:02:20.535381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:31.273 qpair failed and we were unable to recover it.
00:25:31.273 [... the same three-line pattern — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for tqpair=0x1349f90, tqpair=0x7fba7c000b90, and tqpair=0x7fba6c000b90 from 12:02:20.532 through 12:02:20.560 (log timestamps 00:25:31.273–00:25:31.276) ...]
00:25:31.276 [2024-07-12 12:02:20.560989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.561907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.561991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.562298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 
00:25:31.276 [2024-07-12 12:02:20.562473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.562619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.562790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.562935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.562962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 
00:25:31.276 [2024-07-12 12:02:20.563821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.563953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.563980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.564839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.564998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 
00:25:31.276 [2024-07-12 12:02:20.565131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.565905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.565933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 
00:25:31.276 [2024-07-12 12:02:20.566403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.566951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.566979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.276 [2024-07-12 12:02:20.567099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.276 [2024-07-12 12:02:20.567127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.276 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.567240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.567388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.567508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.567660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.567797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.567919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.567946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.568903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.568931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.569162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.569882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.569923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.570582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.570925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.570954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.571823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.571946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.571973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.572903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.572930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.573280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.573827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.573884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.574719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.574880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.574911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.575894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.575988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 
00:25:31.277 [2024-07-12 12:02:20.576117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.576947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.277 [2024-07-12 12:02:20.576987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.277 qpair failed and we were unable to recover it. 00:25:31.277 [2024-07-12 12:02:20.577109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.577224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.577327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.577467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.577612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.577764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.577913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.577942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.578748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.578876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.578904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.579939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.579965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.580219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.580820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.580979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.581650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.581893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.581920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.582783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.582904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.582932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.583876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.583917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.584280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.584901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.584997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.585172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.585313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.585460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 
00:25:31.278 [2024-07-12 12:02:20.585598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.585771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.585929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.585970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.586129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.586301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.586446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.586586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.278 [2024-07-12 12:02:20.586731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.278 qpair failed and we were unable to recover it. 00:25:31.278 [2024-07-12 12:02:20.586827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.586856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.587156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.587948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.587977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.588516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.588950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.588978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.589799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.589951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.589978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.590858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.590974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.591254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.591875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.591995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.592712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.592897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.592991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.593874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.593988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 
00:25:31.279 [2024-07-12 12:02:20.594209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.594330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.594443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.594585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.594713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.594864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.594910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.595005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.595033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.595177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.279 [2024-07-12 12:02:20.595203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.279 qpair failed and we were unable to recover it. 00:25:31.279 [2024-07-12 12:02:20.595315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.595431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 
00:25:31.280 [2024-07-12 12:02:20.595547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.595687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.595799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.595938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.595966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 
00:25:31.280 [2024-07-12 12:02:20.596877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.596905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.596993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.597940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.597968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 
00:25:31.280 [2024-07-12 12:02:20.598201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.598887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.598916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 
00:25:31.280 [2024-07-12 12:02:20.599540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.599840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.599875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.600812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 
00:25:31.280 [2024-07-12 12:02:20.600937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.600964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.601072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.601242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.601362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.601477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-07-12 12:02:20.601618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.280 qpair failed and we were unable to recover it. 00:25:31.280 [2024-07-12 12:02:20.601730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.601757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.601847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.601884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 
00:25:31.281 [2024-07-12 12:02:20.602257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.602968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.602994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 
00:25:31.281 [2024-07-12 12:02:20.603609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.603883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.603911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.604842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.604875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 
00:25:31.281 [2024-07-12 12:02:20.604988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.605936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.605976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 
00:25:31.281 [2024-07-12 12:02:20.606332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.606924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.606953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 
00:25:31.281 [2024-07-12 12:02:20.607842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.607963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.607989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.608097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.608123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.608203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.608229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.281 [2024-07-12 12:02:20.608322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.281 [2024-07-12 12:02:20.608347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.281 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.608443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.608471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.608588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.608615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.608727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.608754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.608876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.608903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 
00:25:31.282 [2024-07-12 12:02:20.609191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.609933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.609960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 
00:25:31.282 [2024-07-12 12:02:20.610655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.610923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.610964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.611894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.611921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 
00:25:31.282 [2024-07-12 12:02:20.612011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.612940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.612968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.613084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.613230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 
00:25:31.282 [2024-07-12 12:02:20.613372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.613558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.613693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.613875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.613903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.614705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 
00:25:31.282 [2024-07-12 12:02:20.614861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.614901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.615024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.282 [2024-07-12 12:02:20.615052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.282 qpair failed and we were unable to recover it. 00:25:31.282 [2024-07-12 12:02:20.615167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.615313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.615456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.615622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.615746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.615895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.615922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 
00:25:31.283 [2024-07-12 12:02:20.616289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.616960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.616987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 
00:25:31.283 [2024-07-12 12:02:20.617692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.617960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.617986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.618885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.618925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 
00:25:31.283 [2024-07-12 12:02:20.619204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.619905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.619958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 
00:25:31.283 [2024-07-12 12:02:20.620627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.620944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.620971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.621089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.621116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.621214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.621241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.621359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.283 [2024-07-12 12:02:20.621385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.283 qpair failed and we were unable to recover it. 00:25:31.283 [2024-07-12 12:02:20.621474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.621501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.621593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.621620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.621703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.621729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.621849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.621882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 
00:25:31.284 [2024-07-12 12:02:20.621980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.622893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.622924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 
00:25:31.284 [2024-07-12 12:02:20.623271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.623963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.623990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 
00:25:31.284 [2024-07-12 12:02:20.624563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.624955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.624982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.625789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 
00:25:31.284 [2024-07-12 12:02:20.625954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.625995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.626970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.626997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 
00:25:31.284 [2024-07-12 12:02:20.627391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.627939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.627966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.628057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.628086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.628209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.628240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.628333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.284 [2024-07-12 12:02:20.628360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.284 qpair failed and we were unable to recover it. 00:25:31.284 [2024-07-12 12:02:20.628448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.628475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.628563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.628591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.628680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.628707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.628850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.628883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.628972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.628999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.629938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.629965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.630105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.630260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.630408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.630550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.630696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.630829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.630873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.631546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.631969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.631996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.632757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.632879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.632907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.633905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.633989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.634253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.634881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.634909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.635564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.635970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.635999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 
00:25:31.285 [2024-07-12 12:02:20.636862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.636897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.636988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.285 [2024-07-12 12:02:20.637767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.285 [2024-07-12 12:02:20.637796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.285 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.637885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.637912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.638173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.638910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.638951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.639613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.639861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.639894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.640798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.640940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.640967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.641940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.641967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.642192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.642970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.642996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.643492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.643897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.643986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.644819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.644938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.644964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.645860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.645994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 
00:25:31.286 [2024-07-12 12:02:20.646145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.646929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.286 [2024-07-12 12:02:20.646956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.286 qpair failed and we were unable to recover it. 00:25:31.286 [2024-07-12 12:02:20.647068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.647185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.647301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.647480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.647626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.647741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.647917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.647944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.648725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.648871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.648898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.649950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.649977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.650213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.650807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.650847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.651593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.651885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.651912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.652850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.652885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.652986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.653862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.653892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.654262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.654921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.654947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.655676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.655848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.655980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.656907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.656993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 
00:25:31.287 [2024-07-12 12:02:20.657168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.657283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.657395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.657547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.287 qpair failed and we were unable to recover it. 00:25:31.287 [2024-07-12 12:02:20.657662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.287 [2024-07-12 12:02:20.657689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.657830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.657857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.657963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.657990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.658443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.658942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.658983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.659758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.659906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.659932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.660896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.660982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.661128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.661887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.661915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.662426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.662890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.662984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.663827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.663957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.663984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.664894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.664921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.665161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.665861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.665980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 
00:25:31.288 [2024-07-12 12:02:20.666527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.666952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.666979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.667102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.667128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.288 [2024-07-12 12:02:20.667243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.288 [2024-07-12 12:02:20.667269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.288 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.667379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.667405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.667497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.667525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.667644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.667671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.667785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.667811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.667920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.667952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.668947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.668974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.669257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.669922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.669950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.670536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.670893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.670920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.671716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.671858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.671892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.672856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.672976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.673098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.673221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.673360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.673535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.673709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.673881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.673922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.674774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.674945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.674973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.675923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.675951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.676182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.676888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.676915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 
00:25:31.289 [2024-07-12 12:02:20.677684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.677860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.677985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.289 [2024-07-12 12:02:20.678013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.289 qpair failed and we were unable to recover it. 00:25:31.289 [2024-07-12 12:02:20.678099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.678947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.678973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.679092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.679907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.679993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.680410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.680949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.680977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.681760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.681799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.681915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1031663 Killed "${NVMF_APP[@]}" "$@" 00:25:31.290 [2024-07-12 12:02:20.681956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.682110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.682139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.682259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:31.290 [2024-07-12 12:02:20.682287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:31.290 [2024-07-12 12:02:20.682414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.682443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:31.290 [2024-07-12 12:02:20.682593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.682626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:31.290 [2024-07-12 12:02:20.682720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.682747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
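Mixed into the failure triplets above, target_disconnect.sh (line 36) reports the previous target process, pid 1031663, being killed on purpose, and disconnect_init 10.0.0.2 begins bringing a replacement up via nvmfappstart -m 0xF0. While that restart is in flight the host side simply keeps retrying the connection, which is why the errno = 111 entries continue uninterrupted. A simplified retry loop that illustrates the pattern visible in the log (only a sketch under those assumptions, not SPDK's actual qpair reconnect path):

/* Simplified illustration only -- this is NOT SPDK's reconnect logic.
 * It shows the pattern in the log: retry connect() while the target is
 * down, and stop once a listener is back (or a retry budget runs out). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                       /* a listener is back */

    int err = errno;                     /* preserve errno across close() */
    close(fd);
    errno = err;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 30; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        printf("attempt %d: errno = %d (%s)\n", attempt, errno, strerror(errno));
        sleep(1);                        /* back off before retrying */
    }
    return 1;
}

Each failed attempt in a loop like this corresponds to one posix_sock_create/nvme_tcp_qpair_connect_sock pair in the output above; only once a listener is bound to port 4420 again can a connect() attempt succeed.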
00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.290 [2024-07-12 12:02:20.682887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.682927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.683961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.683989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.684107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.684260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.684403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.684570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.684725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.684903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.684955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.685770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.685901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.685954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.686079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.686107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.686235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.686262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.686350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.686376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.686521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.686548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1032165 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1032165 00:25:31.290 [2024-07-12 12:02:20.687483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.687515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.687642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.687671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1032165 ']' 00:25:31.290 qpair failed and we were unable to recover it. 
00:25:31.290 [2024-07-12 12:02:20.687792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.687820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 [2024-07-12 12:02:20.687923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.687950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:31.290 [2024-07-12 12:02:20.688069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 [2024-07-12 12:02:20.688098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.290 qpair failed and we were unable to recover it. 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.290 [2024-07-12 12:02:20.688256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.290 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:31.291 [2024-07-12 12:02:20.688283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 12:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.291 [2024-07-12 12:02:20.688405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.688433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.688535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.688566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
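The shell trace interleaved above records the replacement target being launched: nvmfpid=1032165, started as nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace, followed by waitforlisten 1032165, which blocks until the new process answers on its RPC socket at /var/tmp/spdk.sock. Conceptually that wait amounts to polling the UNIX-domain socket until a connect() succeeds, roughly as in the hypothetical sketch below (the real waitforlisten is a shell helper in the SPDK test scripts, not this program):

/* Rough sketch of what "waitforlisten" does conceptually: poll until the
 * target process accepts connections on its UNIX-domain RPC socket.
 * The socket path comes from the log above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_sock_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";

    for (int i = 0; i < 100; i++) {
        if (rpc_sock_ready(path)) {
            printf("%s is accepting connections\n", path);
            return 0;
        }
        usleep(100 * 1000);              /* wait 100 ms and poll again */
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}

Until that wait completes and the new target is configured with a TCP listener, the host-side connect attempts to 10.0.0.2:4420 continue to fail as shown in the surrounding entries.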
00:25:31.291 [2024-07-12 12:02:20.689165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.689849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.689982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.690192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.690354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.690506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.690627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.690750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.690941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.690968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.691879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.691918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.692022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.692886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.692912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.693252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.693926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.693953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.694621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.694898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.694979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.695853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.695890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.695977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.696918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.696945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 
00:25:31.291 [2024-07-12 12:02:20.697196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.697886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.291 [2024-07-12 12:02:20.697989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.291 [2024-07-12 12:02:20.698017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.291 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 
00:25:31.559 [2024-07-12 12:02:20.698512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.698897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.698926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 
00:25:31.559 [2024-07-12 12:02:20.699830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.699967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.699994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.700900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.700992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 
00:25:31.559 [2024-07-12 12:02:20.701110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.559 [2024-07-12 12:02:20.701897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.559 qpair failed and we were unable to recover it. 00:25:31.559 [2024-07-12 12:02:20.701996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 
00:25:31.560 [2024-07-12 12:02:20.702457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.702888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.702986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 
00:25:31.560 [2024-07-12 12:02:20.703760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.703877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.703904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.704905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.704932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 
00:25:31.560 [2024-07-12 12:02:20.705027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.705928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.705955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 
00:25:31.560 [2024-07-12 12:02:20.706193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.560 [2024-07-12 12:02:20.706784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.560 [2024-07-12 12:02:20.706824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.560 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.706924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.706953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 
00:25:31.561 [2024-07-12 12:02:20.707449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.707925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.707962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.708772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 
00:25:31.561 [2024-07-12 12:02:20.708897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.708923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.709878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.709907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 
00:25:31.561 [2024-07-12 12:02:20.710149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.710920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.710947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 
00:25:31.561 [2024-07-12 12:02:20.711396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.711901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.711928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.561 [2024-07-12 12:02:20.712018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.561 [2024-07-12 12:02:20.712044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.561 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 
00:25:31.562 [2024-07-12 12:02:20.712661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.712912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.712940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.713033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.713063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.713158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.713185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.713299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.713327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.714045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.714077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.714211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.714240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.714891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.714921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.715026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.715053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 
00:25:31.562 [2024-07-12 12:02:20.715749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.715778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.715933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.715962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.716931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.716958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 
00:25:31.562 [2024-07-12 12:02:20.717182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.717892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.717933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.718022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.718049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.718836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.718916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.719023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.719051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.719766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.719796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 
00:25:31.562 [2024-07-12 12:02:20.719917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.719945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.562 qpair failed and we were unable to recover it. 00:25:31.562 [2024-07-12 12:02:20.720752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.562 [2024-07-12 12:02:20.720778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.720905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.720933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 
00:25:31.563 [2024-07-12 12:02:20.721271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.721942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.721968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.722089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.722115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.722822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.722863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.722973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 
00:25:31.563 [2024-07-12 12:02:20.723225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.723904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.723983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 
00:25:31.563 [2024-07-12 12:02:20.724552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.724909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.724937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.563 [2024-07-12 12:02:20.725687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 
00:25:31.563 [2024-07-12 12:02:20.725842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.563 [2024-07-12 12:02:20.725891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.563 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.726862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.726909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 
00:25:31.564 [2024-07-12 12:02:20.727264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.727842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.727977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.728090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.728268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.728408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.728542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 
00:25:31.564 [2024-07-12 12:02:20.728731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.728901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.728928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.729888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.729920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 
00:25:31.564 [2024-07-12 12:02:20.730036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.730835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.730882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.731037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.731063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.731174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.731205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.731300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.731326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 
00:25:31.564 [2024-07-12 12:02:20.731438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.564 [2024-07-12 12:02:20.731465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.564 qpair failed and we were unable to recover it. 00:25:31.564 [2024-07-12 12:02:20.731589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.731615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.731730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.731757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.731854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.731905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 
00:25:31.565 [2024-07-12 12:02:20.732840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.732888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.732974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.733968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.733994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 
00:25:31.565 [2024-07-12 12:02:20.734694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.734724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.734826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.734854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.735713] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:25:31.565 [2024-07-12 12:02:20.735793] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.565 [2024-07-12 12:02:20.735887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.735916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 
00:25:31.565 [2024-07-12 12:02:20.736164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.736899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.736927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.737011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.737037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.737118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.737143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.737240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.737266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 
00:25:31.565 [2024-07-12 12:02:20.737383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.737410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.565 qpair failed and we were unable to recover it. 00:25:31.565 [2024-07-12 12:02:20.737506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.565 [2024-07-12 12:02:20.737532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.737620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.737647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.737733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.737759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.737888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.737915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 
00:25:31.566 [2024-07-12 12:02:20.738634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.738893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.738921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.739863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.739897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 
00:25:31.566 [2024-07-12 12:02:20.739978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.740907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.740936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 
00:25:31.566 [2024-07-12 12:02:20.741302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.741960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.741987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 
00:25:31.566 [2024-07-12 12:02:20.742620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.566 [2024-07-12 12:02:20.742790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.566 qpair failed and we were unable to recover it. 00:25:31.566 [2024-07-12 12:02:20.742883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.742910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.743762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 
00:25:31.567 [2024-07-12 12:02:20.743892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.743919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.744844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.744982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.745010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 00:25:31.567 [2024-07-12 12:02:20.745106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.567 [2024-07-12 12:02:20.745133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.567 qpair failed and we were unable to recover it. 
00:25:31.567 [2024-07-12 12:02:20.745220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.567 [2024-07-12 12:02:20.745249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1349f90 with addr=10.0.0.2, port=4420
00:25:31.567 qpair failed and we were unable to recover it.
00:25:31.567 [2024-07-12 12:02:20.745947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.567 [2024-07-12 12:02:20.745984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420
00:25:31.567 qpair failed and we were unable to recover it.
00:25:31.568 [2024-07-12 12:02:20.747927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.568 [2024-07-12 12:02:20.747967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420
00:25:31.568 qpair failed and we were unable to recover it.
00:25:31.568 [2024-07-12 12:02:20.749900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.568 [2024-07-12 12:02:20.749941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420
00:25:31.568 qpair failed and we were unable to recover it.
00:25:31.572 EAL: No free 2048 kB hugepages reported on node 1
00:25:31.572 [2024-07-12 12:02:20.773539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.572 [2024-07-12 12:02:20.773567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.572 qpair failed and we were unable to recover it. 00:25:31.572 [2024-07-12 12:02:20.773690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.572 [2024-07-12 12:02:20.773717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.773818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.773845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.773948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.773973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.774726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 
00:25:31.573 [2024-07-12 12:02:20.774889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.774917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.775913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.775941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 
00:25:31.573 [2024-07-12 12:02:20.776272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.776904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.776931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 
00:25:31.573 [2024-07-12 12:02:20.777566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.777957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.777982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.778778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 
00:25:31.573 [2024-07-12 12:02:20.778952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.778979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.779070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.573 [2024-07-12 12:02:20.779095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.573 qpair failed and we were unable to recover it. 00:25:31.573 [2024-07-12 12:02:20.779242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.779882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.779986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.780131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba74000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 A controller has encountered a failure and is being reset. 
00:25:31.574 [2024-07-12 12:02:20.780295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba6c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.780456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.780576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.780721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba7c000b90 with addr=10.0.0.2, port=4420 00:25:31.574 qpair failed and we were unable to recover it. 00:25:31.574 [2024-07-12 12:02:20.780923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.574 [2024-07-12 12:02:20.780971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1357bb0 with addr=10.0.0.2, port=4420 00:25:31.574 [2024-07-12 12:02:20.780991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357bb0 is same with the state(5) to be set 00:25:31.574 [2024-07-12 12:02:20.781017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1357bb0 (9): Bad file descriptor 00:25:31.574 [2024-07-12 12:02:20.781035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.574 [2024-07-12 12:02:20.781049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.574 [2024-07-12 12:02:20.781064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.574 Unable to reset the controller. 00:25:31.574 [2024-07-12 12:02:20.810128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.574 [2024-07-12 12:02:20.922527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.574 [2024-07-12 12:02:20.922582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.574 [2024-07-12 12:02:20.922608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.574 [2024-07-12 12:02:20.922620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.574 [2024-07-12 12:02:20.922632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.574 [2024-07-12 12:02:20.922686] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:25:31.574 [2024-07-12 12:02:20.922741] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:25:31.574 [2024-07-12 12:02:20.922790] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:25:31.574 [2024-07-12 12:02:20.922793] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 Malloc0 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 [2024-07-12 12:02:21.775407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.508 [2024-07-12 12:02:21.781443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.508 [2024-07-12 12:02:21.781506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1357bb0 with addr=10.0.0.2, port=4420 00:25:32.508 [2024-07-12 12:02:21.781535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357bb0 is same with the state(5) to be set 00:25:32.508 [2024-07-12 12:02:21.781570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1357bb0 (9): Bad file descriptor 00:25:32.508 [2024-07-12 12:02:21.781589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:32.508 [2024-07-12 12:02:21.781602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:32.508 [2024-07-12 12:02:21.781618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.508 Unable to reset the controller. 
00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 [2024-07-12 12:02:21.803627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.508 12:02:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1031690 00:25:33.450 qpair failed and we were unable to recover it. 00:25:33.450 qpair failed and we were unable to recover it. 00:25:33.450 qpair failed and we were unable to recover it. 00:25:33.450 Controller properly reset. 
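(The rpc_cmd calls above configure the target that the disconnect test reconnects to. A standalone sketch of the same sequence, assuming scripts/rpc.py from the SPDK tree and an nvmf_tgt already listening on the default /var/tmp/spdk.sock RPC socket:
    # backing namespace: 64 MiB malloc bdev with 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # TCP transport (flags mirror the test invocation above), then a subsystem that allows any host
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # data and discovery listeners on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
)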
00:25:37.647 Initializing NVMe Controllers 00:25:37.647 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:37.647 Initialization complete. Launching workers. 00:25:37.647 Starting thread on core 1 00:25:37.647 Starting thread on core 2 00:25:37.647 Starting thread on core 3 00:25:37.647 Starting thread on core 0 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:37.647 00:25:37.647 real 0m10.606s 00:25:37.647 user 0m34.017s 00:25:37.647 sys 0m8.099s 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.647 ************************************ 00:25:37.647 END TEST nvmf_target_disconnect_tc2 00:25:37.647 ************************************ 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.647 rmmod nvme_tcp 00:25:37.647 rmmod nvme_fabrics 00:25:37.647 rmmod nvme_keyring 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1032165 ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1032165 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1032165 ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1032165 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1032165 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@955 -- # process_name=reactor_4 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1032165' 00:25:37.647 killing process with pid 1032165 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1032165 00:25:37.647 12:02:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1032165 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.905 12:02:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.440 12:02:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:40.440 00:25:40.440 real 0m15.478s 00:25:40.440 user 0m59.154s 00:25:40.440 sys 0m10.506s 00:25:40.440 12:02:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:40.440 12:02:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 ************************************ 00:25:40.440 END TEST nvmf_target_disconnect 00:25:40.440 ************************************ 00:25:40.440 12:02:29 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:25:40.440 12:02:29 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:40.440 12:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 12:02:29 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:40.440 00:25:40.440 real 19m24.120s 00:25:40.440 user 46m52.695s 00:25:40.440 sys 4m48.683s 00:25:40.440 12:02:29 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:40.440 12:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 ************************************ 00:25:40.440 END TEST nvmf_tcp 00:25:40.440 ************************************ 00:25:40.440 12:02:29 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:40.440 12:02:29 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:40.440 12:02:29 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:40.440 12:02:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:40.440 12:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 ************************************ 00:25:40.440 START TEST spdkcli_nvmf_tcp 00:25:40.440 ************************************ 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:40.440 * Looking for test storage... 
00:25:40.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1033302 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1033302 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1033302 ']' 00:25:40.440 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.441 [2024-07-12 12:02:29.508124] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:25:40.441 [2024-07-12 12:02:29.508219] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033302 ] 00:25:40.441 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.441 [2024-07-12 12:02:29.563580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:40.441 [2024-07-12 12:02:29.671880] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.441 [2024-07-12 12:02:29.671885] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.441 12:02:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:40.441 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:40.441 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:40.441 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:40.441 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:40.441 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:40.441 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:40.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:40.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:40.441 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:40.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:40.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:40.441 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:40.441 ' 00:25:42.971 [2024-07-12 12:02:32.318211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.349 [2024-07-12 12:02:33.542534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:46.884 [2024-07-12 12:02:35.817843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:48.262 [2024-07-12 12:02:37.752034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:50.168 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:50.168 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:50.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:50.168 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:50.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:50.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:50.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:50.168 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:50.168 12:02:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:50.427 12:02:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:50.427 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:50.427 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:50.427 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:50.427 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:50.427 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:50.427 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:50.427 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:50.427 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:50.427 ' 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:55.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:55.718 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:55.718 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:55.718 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1033302 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1033302 ']' 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1033302 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1033302 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1033302' 00:25:55.718 killing process with pid 1033302 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1033302 00:25:55.718 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1033302 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1033302 ']' 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1033302 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1033302 ']' 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1033302 00:25:55.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1033302) - No such process 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1033302 is not found' 00:25:55.975 Process with pid 1033302 is not found 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:55.975 00:25:55.975 real 0m15.982s 00:25:55.975 user 0m33.657s 00:25:55.975 sys 0m0.826s 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:55.975 12:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.975 ************************************ 00:25:55.975 END TEST spdkcli_nvmf_tcp 00:25:55.975 ************************************ 00:25:55.975 12:02:45 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:55.975 12:02:45 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:55.975 12:02:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:55.975 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:25:55.975 ************************************ 00:25:55.975 START TEST nvmf_identify_passthru 00:25:55.975 ************************************ 00:25:55.976 12:02:45 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:56.234 * Looking for test storage... 00:25:56.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:56.234 12:02:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.234 12:02:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:56.234 12:02:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.234 12:02:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.234 12:02:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:56.234 12:02:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:56.234 12:02:45 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:56.234 12:02:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
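The fragmented nvmf/common.sh trace around this point is the gather_supported_nvmf_pci_devs step: the harness sorts NICs into e810/x722/mlx buckets by PCI vendor:device ID, keeps the e810 ports for this phy run, and then collects the net devices sitting under each selected PCI function. A condensed sketch of that classification (simplified and hypothetical, not the verbatim script):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=() pci_devs=() net_devs=()
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;   # Intel E810
        "$intel:0x37d2")                 x722+=("${dev##*/}") ;;   # Intel X722
        "$mellanox:"*)                   mlx+=("${dev##*/}")  ;;   # Mellanox (simplified match)
      esac
    done
    pci_devs=("${e810[@]}")                               # e810 preferred, as in this run
    for pci in "${pci_devs[@]}"; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && net_devs+=("${netdir##*/}")   # e.g. cvl_0_0, cvl_0_1
      done
    done

In this run the two E810 ports 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1, which the TCP init code then splits between the target namespace and the initiator.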
00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:58.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:58.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:58.131 12:02:47 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:58.131 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:58.131 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.131 12:02:47 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.131 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:25:58.131 00:25:58.131 --- 10.0.0.2 ping statistics --- 00:25:58.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.131 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:58.132 00:25:58.132 --- 10.0.0.1 ping statistics --- 00:25:58.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.132 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.132 12:02:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.132 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.132 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:58.132 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:25:58.390 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:25:58.390 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:88:00.0 00:25:58.390 12:02:47 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:88:00.0 00:25:58.390 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:58.390 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:58.390 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:58.390 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:58.390 12:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:58.390 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.641 
12:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:26:02.641 12:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:02.641 12:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:02.641 12:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:02.641 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.831 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:06.831 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:06.831 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:06.831 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:06.831 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:06.831 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:06.831 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:06.832 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1037917 00:26:06.832 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:06.832 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.832 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1037917 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1037917 ']' 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:06.832 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:06.832 [2024-07-12 12:02:56.138453] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:26:06.832 [2024-07-12 12:02:56.138535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.832 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.832 [2024-07-12 12:02:56.204083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.832 [2024-07-12 12:02:56.316508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.832 [2024-07-12 12:02:56.316568] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
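Up to this point identify_passthru has finished its nvmftestinit and nvme_identify phases; everything after the last timing marker is the nvmf_tgt start-up. A condensed recap of the bring-up just traced (same commands as in the log, binary paths abbreviated, not the verbatim scripts): one E810 port is moved into a private network namespace so target and initiator can talk over 10.0.0.0/24 on a single host, TCP/4420 is opened in the firewall, and the controller's serial and model are read over PCIe so they can later be compared against what the passthru subsystem reports over NVMe/TCP:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    bdf=0000:88:00.0
    nvme_serial_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
                           | grep 'Serial Number:' | awk '{print $3}')   # PHLJ916004901P0FGN
    nvme_model_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
                           | grep 'Model Number:'  | awk '{print $3}')   # INTEL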
00:26:06.832 [2024-07-12 12:02:56.316581] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.832 [2024-07-12 12:02:56.316592] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.832 [2024-07-12 12:02:56.316602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.832 [2024-07-12 12:02:56.316687] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.832 [2024-07-12 12:02:56.316749] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.832 [2024-07-12 12:02:56.316818] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.832 [2024-07-12 12:02:56.316821] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:26:07.091 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:07.091 INFO: Log level set to 20 00:26:07.091 INFO: Requests: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "method": "nvmf_set_config", 00:26:07.091 "id": 1, 00:26:07.091 "params": { 00:26:07.091 "admin_cmd_passthru": { 00:26:07.091 "identify_ctrlr": true 00:26:07.091 } 00:26:07.091 } 00:26:07.091 } 00:26:07.091 00:26:07.091 INFO: response: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "id": 1, 00:26:07.091 "result": true 00:26:07.091 } 00:26:07.091 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:07.091 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:07.091 INFO: Setting log level to 20 00:26:07.091 INFO: Setting log level to 20 00:26:07.091 INFO: Log level set to 20 00:26:07.091 INFO: Log level set to 20 00:26:07.091 INFO: Requests: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "method": "framework_start_init", 00:26:07.091 "id": 1 00:26:07.091 } 00:26:07.091 00:26:07.091 INFO: Requests: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "method": "framework_start_init", 00:26:07.091 "id": 1 00:26:07.091 } 00:26:07.091 00:26:07.091 [2024-07-12 12:02:56.461248] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:07.091 INFO: response: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "id": 1, 00:26:07.091 "result": true 00:26:07.091 } 00:26:07.091 00:26:07.091 INFO: response: 00:26:07.091 { 00:26:07.091 "jsonrpc": "2.0", 00:26:07.091 "id": 1, 00:26:07.091 "result": true 00:26:07.091 } 00:26:07.091 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:07.091 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:07.091 12:02:56 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.091 INFO: Setting log level to 40 00:26:07.091 INFO: Setting log level to 40 00:26:07.091 INFO: Setting log level to 40 00:26:07.091 [2024-07-12 12:02:56.471358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:07.091 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:07.091 12:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:07.091 12:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.377 Nvme0n1 00:26:10.377 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.377 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 [2024-07-12 12:02:59.368071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 [ 00:26:10.378 { 00:26:10.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:10.378 "subtype": "Discovery", 00:26:10.378 "listen_addresses": [], 00:26:10.378 "allow_any_host": true, 00:26:10.378 "hosts": [] 00:26:10.378 }, 00:26:10.378 { 00:26:10.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.378 "subtype": "NVMe", 00:26:10.378 "listen_addresses": [ 00:26:10.378 { 00:26:10.378 "trtype": "TCP", 00:26:10.378 "adrfam": "IPv4", 00:26:10.378 "traddr": "10.0.0.2", 00:26:10.378 "trsvcid": "4420" 00:26:10.378 } 00:26:10.378 ], 00:26:10.378 "allow_any_host": true, 00:26:10.378 "hosts": [], 00:26:10.378 "serial_number": 
"SPDK00000000000001", 00:26:10.378 "model_number": "SPDK bdev Controller", 00:26:10.378 "max_namespaces": 1, 00:26:10.378 "min_cntlid": 1, 00:26:10.378 "max_cntlid": 65519, 00:26:10.378 "namespaces": [ 00:26:10.378 { 00:26:10.378 "nsid": 1, 00:26:10.378 "bdev_name": "Nvme0n1", 00:26:10.378 "name": "Nvme0n1", 00:26:10.378 "nguid": "5AFC75F23754474AA445F8F7617A3963", 00:26:10.378 "uuid": "5afc75f2-3754-474a-a445-f8f7617a3963" 00:26:10.378 } 00:26:10.378 ] 00:26:10.378 } 00:26:10.378 ] 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:10.378 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:10.378 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:10.378 12:02:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.378 rmmod nvme_tcp 00:26:10.378 rmmod nvme_fabrics 00:26:10.378 rmmod nvme_keyring 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:10.378 12:02:59 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1037917 ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1037917 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1037917 ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1037917 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1037917 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1037917' 00:26:10.378 killing process with pid 1037917 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1037917 00:26:10.378 12:02:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1037917 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.276 12:03:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.276 12:03:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:12.276 12:03:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.182 12:03:03 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.182 00:26:14.182 real 0m18.009s 00:26:14.182 user 0m26.626s 00:26:14.182 sys 0m2.332s 00:26:14.182 12:03:03 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:14.182 12:03:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:14.183 ************************************ 00:26:14.183 END TEST nvmf_identify_passthru 00:26:14.183 ************************************ 00:26:14.183 12:03:03 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:14.183 12:03:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:14.183 12:03:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:14.183 12:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:14.183 ************************************ 00:26:14.183 START TEST nvmf_dif 00:26:14.183 ************************************ 00:26:14.183 12:03:03 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:14.183 * Looking for test storage... 
00:26:14.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.183 12:03:03 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.183 12:03:03 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.183 12:03:03 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.183 12:03:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.183 12:03:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.183 12:03:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.183 12:03:03 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:14.183 12:03:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:14.183 12:03:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.183 12:03:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:14.183 12:03:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.183 12:03:03 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.183 12:03:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:16.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.083 12:03:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:16.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:16.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:16.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.084 12:03:05 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:26:16.084 00:26:16.084 --- 10.0.0.2 ping statistics --- 00:26:16.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.084 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:26:16.084 00:26:16.084 --- 10.0.0.1 ping statistics --- 00:26:16.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.084 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:16.084 12:03:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:17.458 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:17.458 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:17.458 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:17.459 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:17.459 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:17.459 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:17.459 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:17.459 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:17.459 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:17.459 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:17.459 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:17.459 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:17.459 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:17.459 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:17.459 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:17.459 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:17.459 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.459 12:03:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:17.459 12:03:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1041180 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:17.459 12:03:06 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1041180 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1041180 ']' 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:17.459 12:03:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.459 [2024-07-12 12:03:06.911696] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:26:17.459 [2024-07-12 12:03:06.911780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.459 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.717 [2024-07-12 12:03:06.979799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.717 [2024-07-12 12:03:07.100079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.717 [2024-07-12 12:03:07.100133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.717 [2024-07-12 12:03:07.100163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.717 [2024-07-12 12:03:07.100175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.717 [2024-07-12 12:03:07.100185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
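For readability, the loopback topology that nvmftestinit / nvmf_tcp_init built in the entries above, condensed into the underlying commands (taken directly from the trace; cvl_0_0 and cvl_0_1 are the two E810 ports discovered earlier):

    # target side: move one port into a private namespace and give it the target IP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the second port stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target application then runs inside the namespace, reachable at 10.0.0.2:4420
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The two ping checks (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify this path before nvmf_tgt is started.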
00:26:17.717 [2024-07-12 12:03:07.100220] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.717 12:03:07 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:17.717 12:03:07 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:26:17.976 12:03:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 12:03:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.976 12:03:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:17.976 12:03:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 [2024-07-12 12:03:07.236687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.976 12:03:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 ************************************ 00:26:17.976 START TEST fio_dif_1_default 00:26:17.976 ************************************ 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 bdev_null0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:17.976 [2024-07-12 12:03:07.297012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.976 { 00:26:17.976 "params": { 00:26:17.976 "name": "Nvme$subsystem", 00:26:17.976 "trtype": "$TEST_TRANSPORT", 00:26:17.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.976 "adrfam": "ipv4", 00:26:17.976 "trsvcid": "$NVMF_PORT", 00:26:17.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.976 "hdgst": ${hdgst:-false}, 00:26:17.976 "ddgst": ${ddgst:-false} 00:26:17.976 }, 00:26:17.976 "method": "bdev_nvme_attach_controller" 00:26:17.976 } 00:26:17.976 EOF 00:26:17.976 )") 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.976 12:03:07 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.977 "params": { 00:26:17.977 "name": "Nvme0", 00:26:17.977 "trtype": "tcp", 00:26:17.977 "traddr": "10.0.0.2", 00:26:17.977 "adrfam": "ipv4", 00:26:17.977 "trsvcid": "4420", 00:26:17.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.977 "hdgst": false, 00:26:17.977 "ddgst": false 00:26:17.977 }, 00:26:17.977 "method": "bdev_nvme_attach_controller" 00:26:17.977 }' 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.977 12:03:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.234 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:18.234 fio-3.35 00:26:18.234 Starting 1 thread 00:26:18.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.429 00:26:30.429 filename0: (groupid=0, jobs=1): err= 0: pid=1041766: Fri Jul 12 12:03:18 2024 00:26:30.429 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10038msec) 00:26:30.429 slat (nsec): min=6619, max=49416, avg=8301.79, stdev=2780.60 00:26:30.429 clat (usec): min=530, max=49278, avg=21105.45, stdev=20467.74 00:26:30.429 lat (usec): min=537, max=49298, avg=21113.75, stdev=20467.51 00:26:30.429 clat percentiles (usec): 00:26:30.429 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 594], 00:26:30.429 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:26:30.429 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:26:30.429 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:26:30.429 | 99.99th=[49021] 00:26:30.429 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:26:30.429 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:26:30.429 
lat (usec) : 750=49.89% 00:26:30.429 lat (msec) : 50=50.11% 00:26:30.429 cpu : usr=89.73%, sys=9.99%, ctx=19, majf=0, minf=317 00:26:30.429 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.429 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.429 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:30.429 00:26:30.429 Run status group 0 (all jobs): 00:26:30.429 READ: bw=757KiB/s (775kB/s), 757KiB/s-757KiB/s (775kB/s-775kB/s), io=7600KiB (7782kB), run=10038-10038msec 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.429 00:26:30.429 real 0m11.257s 00:26:30.429 user 0m10.104s 00:26:30.429 sys 0m1.311s 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:30.429 ************************************ 00:26:30.429 END TEST fio_dif_1_default 00:26:30.429 ************************************ 00:26:30.429 12:03:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:30.429 12:03:18 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:30.429 12:03:18 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:30.429 12:03:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:30.429 ************************************ 00:26:30.429 START TEST fio_dif_1_multi_subsystems 00:26:30.429 ************************************ 00:26:30.429 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 bdev_null0 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 [2024-07-12 12:03:18.599271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 bdev_null1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.430 { 00:26:30.430 "params": { 00:26:30.430 "name": "Nvme$subsystem", 00:26:30.430 "trtype": "$TEST_TRANSPORT", 00:26:30.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.430 "adrfam": "ipv4", 00:26:30.430 "trsvcid": "$NVMF_PORT", 00:26:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.430 "hdgst": ${hdgst:-false}, 00:26:30.430 "ddgst": ${ddgst:-false} 00:26:30.430 }, 00:26:30.430 "method": "bdev_nvme_attach_controller" 00:26:30.430 } 00:26:30.430 EOF 00:26:30.430 )") 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.430 { 00:26:30.430 "params": { 00:26:30.430 "name": "Nvme$subsystem", 00:26:30.430 "trtype": "$TEST_TRANSPORT", 00:26:30.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.430 "adrfam": "ipv4", 00:26:30.430 "trsvcid": "$NVMF_PORT", 00:26:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.430 "hdgst": ${hdgst:-false}, 00:26:30.430 "ddgst": ${ddgst:-false} 00:26:30.430 }, 00:26:30.430 "method": "bdev_nvme_attach_controller" 00:26:30.430 } 00:26:30.430 EOF 00:26:30.430 )") 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
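The fio command being assembled here never touches files on disk: the bdev JSON produced by gen_nvmf_target_json is handed over as /dev/fd/62 and the job section from gen_fio_conf as /dev/fd/61 (presumably via process substitution in target/dif.sh), while LD_PRELOAD injects SPDK's fio plugin. A rough standalone equivalent, assuming the merged JSON printed in the next entry were saved as bdev.json and the job section as dif.fio (both file names are placeholders, not from the trace):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

Because the transport was created with --dif-insert-or-strip, the target inserts and strips the 16-byte DIF metadata itself, so the fio jobs on the initiator side issue plain 512-byte-block I/O.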
00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.430 "params": { 00:26:30.430 "name": "Nvme0", 00:26:30.430 "trtype": "tcp", 00:26:30.430 "traddr": "10.0.0.2", 00:26:30.430 "adrfam": "ipv4", 00:26:30.430 "trsvcid": "4420", 00:26:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.430 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.430 "hdgst": false, 00:26:30.430 "ddgst": false 00:26:30.430 }, 00:26:30.430 "method": "bdev_nvme_attach_controller" 00:26:30.430 },{ 00:26:30.430 "params": { 00:26:30.430 "name": "Nvme1", 00:26:30.430 "trtype": "tcp", 00:26:30.430 "traddr": "10.0.0.2", 00:26:30.430 "adrfam": "ipv4", 00:26:30.430 "trsvcid": "4420", 00:26:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.430 "hdgst": false, 00:26:30.430 "ddgst": false 00:26:30.430 }, 00:26:30.430 "method": "bdev_nvme_attach_controller" 00:26:30.430 }' 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:30.430 12:03:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.430 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:30.430 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:30.430 fio-3.35 00:26:30.430 Starting 2 threads 00:26:30.430 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.403 00:26:40.403 filename0: (groupid=0, jobs=1): err= 0: pid=1043320: Fri Jul 12 12:03:29 2024 00:26:40.403 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10032msec) 00:26:40.403 slat (nsec): min=6134, max=49967, avg=9633.68, stdev=2839.72 00:26:40.403 clat (usec): min=40819, max=43253, avg=41081.18, stdev=361.46 00:26:40.403 lat (usec): min=40827, max=43268, avg=41090.81, stdev=361.74 00:26:40.403 clat percentiles (usec): 00:26:40.403 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:40.403 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:40.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:26:40.403 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:26:40.403 | 99.99th=[43254] 
00:26:40.403 bw ( KiB/s): min= 384, max= 416, per=33.88%, avg=388.80, stdev=11.72, samples=20 00:26:40.403 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:40.403 lat (msec) : 50=100.00% 00:26:40.403 cpu : usr=94.23%, sys=5.50%, ctx=9, majf=0, minf=103 00:26:40.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.403 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:40.403 filename1: (groupid=0, jobs=1): err= 0: pid=1043321: Fri Jul 12 12:03:29 2024 00:26:40.403 read: IOPS=189, BW=756KiB/s (774kB/s)(7584KiB/10029msec) 00:26:40.403 slat (nsec): min=7349, max=53651, avg=9583.73, stdev=2778.63 00:26:40.403 clat (usec): min=546, max=43068, avg=21126.63, stdev=20422.60 00:26:40.403 lat (usec): min=555, max=43102, avg=21136.21, stdev=20422.30 00:26:40.403 clat percentiles (usec): 00:26:40.403 | 1.00th=[ 570], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 594], 00:26:40.403 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[41157], 60.00th=[41157], 00:26:40.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:26:40.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:26:40.403 | 99.99th=[43254] 00:26:40.403 bw ( KiB/s): min= 672, max= 768, per=66.02%, avg=756.80, stdev=26.01, samples=20 00:26:40.403 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:26:40.403 lat (usec) : 750=45.57%, 1000=4.22% 00:26:40.403 lat (msec) : 50=50.21% 00:26:40.403 cpu : usr=94.61%, sys=5.11%, ctx=13, majf=0, minf=208 00:26:40.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.403 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:40.403 00:26:40.403 Run status group 0 (all jobs): 00:26:40.403 READ: bw=1145KiB/s (1173kB/s), 389KiB/s-756KiB/s (398kB/s-774kB/s), io=11.2MiB (11.8MB), run=10029-10032msec 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 
00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 00:26:40.661 real 0m11.457s 00:26:40.661 user 0m20.454s 00:26:40.661 sys 0m1.375s 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 ************************************ 00:26:40.661 END TEST fio_dif_1_multi_subsystems 00:26:40.661 ************************************ 00:26:40.661 12:03:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:40.661 12:03:30 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:40.661 12:03:30 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 ************************************ 00:26:40.661 START TEST fio_dif_rand_params 00:26:40.661 ************************************ 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 bdev_null0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.661 [2024-07-12 12:03:30.096761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:40.661 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:40.661 { 00:26:40.661 "params": { 00:26:40.661 "name": "Nvme$subsystem", 00:26:40.661 "trtype": "$TEST_TRANSPORT", 00:26:40.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.661 "adrfam": "ipv4", 00:26:40.661 "trsvcid": "$NVMF_PORT", 00:26:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.662 "hdgst": ${hdgst:-false}, 00:26:40.662 "ddgst": ${ddgst:-false} 00:26:40.662 }, 00:26:40.662 "method": "bdev_nvme_attach_controller" 00:26:40.662 } 00:26:40.662 EOF 00:26:40.662 )") 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:40.662 12:03:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:40.662 "params": { 00:26:40.662 "name": "Nvme0", 00:26:40.662 "trtype": "tcp", 00:26:40.662 "traddr": "10.0.0.2", 00:26:40.662 "adrfam": "ipv4", 00:26:40.662 "trsvcid": "4420", 00:26:40.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:40.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:40.662 "hdgst": false, 00:26:40.662 "ddgst": false 00:26:40.662 }, 00:26:40.662 "method": "bdev_nvme_attach_controller" 00:26:40.662 }' 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:40.662 12:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.921 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:40.921 ... 
00:26:40.921 fio-3.35 00:26:40.921 Starting 3 threads 00:26:40.921 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.476 00:26:47.476 filename0: (groupid=0, jobs=1): err= 0: pid=1044721: Fri Jul 12 12:03:36 2024 00:26:47.476 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(150MiB/5008msec) 00:26:47.476 slat (nsec): min=4841, max=83774, avg=17624.62, stdev=5040.64 00:26:47.476 clat (usec): min=4862, max=55472, avg=12505.42, stdev=6104.11 00:26:47.476 lat (usec): min=4870, max=55488, avg=12523.04, stdev=6103.36 00:26:47.476 clat percentiles (usec): 00:26:47.476 | 1.00th=[ 6194], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:26:47.476 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:26:47.476 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13173], 95.00th=[13829], 00:26:47.476 | 99.00th=[50594], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:26:47.476 | 99.99th=[55313] 00:26:47.476 bw ( KiB/s): min=16896, max=34048, per=36.49%, avg=30617.60, stdev=4986.27, samples=10 00:26:47.476 iops : min= 132, max= 266, avg=239.20, stdev=38.96, samples=10 00:26:47.476 lat (msec) : 10=6.51%, 20=90.99%, 50=1.25%, 100=1.25% 00:26:47.476 cpu : usr=93.15%, sys=6.31%, ctx=21, majf=0, minf=177 00:26:47.476 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.476 filename0: (groupid=0, jobs=1): err= 0: pid=1044722: Fri Jul 12 12:03:36 2024 00:26:47.476 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5007msec) 00:26:47.476 slat (nsec): min=5000, max=47356, avg=18450.94, stdev=5029.30 00:26:47.476 clat (usec): min=4888, max=48644, avg=13481.78, stdev=4256.18 00:26:47.476 lat (usec): min=4896, max=48660, avg=13500.23, stdev=4256.26 00:26:47.476 clat percentiles (usec): 00:26:47.476 | 1.00th=[ 5342], 5.00th=[ 7832], 10.00th=[10421], 20.00th=[11863], 00:26:47.476 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:26:47.476 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15664], 95.00th=[16450], 00:26:47.476 | 99.00th=[45351], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:26:47.476 | 99.99th=[48497] 00:26:47.476 bw ( KiB/s): min=27392, max=30464, per=33.84%, avg=28395.90, stdev=1217.89, samples=10 00:26:47.476 iops : min= 214, max= 238, avg=221.80, stdev= 9.54, samples=10 00:26:47.476 lat (msec) : 10=9.71%, 20=89.21%, 50=1.08% 00:26:47.476 cpu : usr=91.91%, sys=6.69%, ctx=154, majf=0, minf=80 00:26:47.476 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.476 filename0: (groupid=0, jobs=1): err= 0: pid=1044723: Fri Jul 12 12:03:36 2024 00:26:47.476 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(125MiB/5048msec) 00:26:47.476 slat (nsec): min=4809, max=40596, avg=17033.70, stdev=4404.94 00:26:47.476 clat (usec): min=4236, max=89463, avg=15107.65, stdev=5165.36 00:26:47.476 lat (usec): min=4250, max=89478, avg=15124.68, stdev=5165.55 00:26:47.476 clat percentiles (usec): 00:26:47.476 | 
1.00th=[ 5080], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[12911], 00:26:47.476 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15139], 60.00th=[15795], 00:26:47.476 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:26:47.476 | 99.00th=[47449], 99.50th=[49021], 99.90th=[89654], 99.95th=[89654], 00:26:47.476 | 99.99th=[89654] 00:26:47.476 bw ( KiB/s): min=22784, max=27136, per=30.36%, avg=25472.00, stdev=1208.30, samples=10 00:26:47.476 iops : min= 178, max= 212, avg=199.00, stdev= 9.44, samples=10 00:26:47.476 lat (msec) : 10=7.72%, 20=90.98%, 50=1.10%, 100=0.20% 00:26:47.476 cpu : usr=91.18%, sys=6.14%, ctx=402, majf=0, minf=58 00:26:47.476 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.476 issued rwts: total=998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.476 00:26:47.476 Run status group 0 (all jobs): 00:26:47.476 READ: bw=81.9MiB/s (85.9MB/s), 24.7MiB/s-29.9MiB/s (25.9MB/s-31.4MB/s), io=414MiB (434MB), run=5007-5048msec 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:47.476 12:03:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 bdev_null0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 [2024-07-12 12:03:36.306057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.476 bdev_null1 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:47.476 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
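[Note on the setup being traced here] The xtrace records around this point repeat the same four-step RPC sequence for each of subsystems 0, 1 and 2: create a metadata-capable null bdev with end-to-end protection (DIF type 2), create an NVMe-oF subsystem, attach the bdev as a namespace, and expose it on the TCP listener. A condensed, stand-alone sketch of that sequence is below; it assumes scripts/rpc.py from the SPDK tree and a target already running with the addresses used in this job (the test's rpc_cmd wrapper forwards the same arguments), so treat the paths and loop as illustrative rather than the test's literal code.

  # Hypothetical condensed form of the per-subsystem setup shown in the surrounding xtrace.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path assumed from this job
  for i in 0 1 2; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
      "$SPDK"/scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
      "$SPDK"/scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done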
00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 bdev_null2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:47.477 { 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme$subsystem", 00:26:47.477 "trtype": "$TEST_TRANSPORT", 00:26:47.477 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "$NVMF_PORT", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.477 "hdgst": ${hdgst:-false}, 00:26:47.477 "ddgst": ${ddgst:-false} 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 } 00:26:47.477 EOF 00:26:47.477 )") 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:47.477 { 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme$subsystem", 00:26:47.477 "trtype": "$TEST_TRANSPORT", 00:26:47.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "$NVMF_PORT", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.477 "hdgst": ${hdgst:-false}, 00:26:47.477 "ddgst": ${ddgst:-false} 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 } 00:26:47.477 EOF 00:26:47.477 )") 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:47.477 { 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme$subsystem", 00:26:47.477 "trtype": "$TEST_TRANSPORT", 00:26:47.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "$NVMF_PORT", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.477 "hdgst": ${hdgst:-false}, 00:26:47.477 "ddgst": ${ddgst:-false} 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 } 00:26:47.477 EOF 00:26:47.477 )") 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme0", 00:26:47.477 "trtype": "tcp", 00:26:47.477 "traddr": "10.0.0.2", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "4420", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:47.477 "hdgst": false, 00:26:47.477 "ddgst": false 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 },{ 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme1", 00:26:47.477 "trtype": "tcp", 00:26:47.477 "traddr": "10.0.0.2", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "4420", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:47.477 "hdgst": false, 00:26:47.477 "ddgst": false 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 },{ 00:26:47.477 "params": { 00:26:47.477 "name": "Nvme2", 00:26:47.477 "trtype": "tcp", 00:26:47.477 "traddr": "10.0.0.2", 00:26:47.477 "adrfam": "ipv4", 00:26:47.477 "trsvcid": "4420", 00:26:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:47.477 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:47.477 "hdgst": false, 00:26:47.477 "ddgst": false 00:26:47.477 }, 00:26:47.477 "method": "bdev_nvme_attach_controller" 00:26:47.477 }' 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:47.477 12:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:47.477 ... 00:26:47.477 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:47.477 ... 00:26:47.477 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:47.477 ... 00:26:47.477 fio-3.35 00:26:47.477 Starting 24 threads 00:26:47.478 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.687 00:26:59.687 filename0: (groupid=0, jobs=1): err= 0: pid=1045584: Fri Jul 12 12:03:47 2024 00:26:59.687 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.7MiB/10038msec) 00:26:59.687 slat (usec): min=8, max=100, avg=30.60, stdev=11.05 00:26:59.687 clat (usec): min=25916, max=99206, avg=33295.87, stdev=4148.35 00:26:59.687 lat (usec): min=25982, max=99231, avg=33326.48, stdev=4148.04 00:26:59.687 clat percentiles (usec): 00:26:59.687 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.687 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.687 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.687 | 99.00th=[37487], 99.50th=[58983], 99.90th=[99091], 99.95th=[99091], 00:26:59.687 | 99.99th=[99091] 00:26:59.687 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1907.20, stdev=91.93, samples=20 00:26:59.687 iops : min= 416, max= 512, avg=476.80, stdev=22.98, samples=20 00:26:59.687 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.687 cpu : usr=98.31%, sys=1.30%, ctx=17, majf=0, minf=40 00:26:59.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.687 filename0: (groupid=0, jobs=1): err= 0: pid=1045585: Fri Jul 12 12:03:47 2024 00:26:59.687 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.7MiB/10034msec) 00:26:59.687 slat (usec): min=8, max=122, avg=32.95, stdev=13.99 00:26:59.687 clat (usec): min=26660, max=99187, avg=33264.61, stdev=4078.92 00:26:59.687 lat (usec): min=26718, max=99212, avg=33297.55, stdev=4079.18 00:26:59.687 clat percentiles (usec): 00:26:59.687 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.687 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.687 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.687 | 99.00th=[37487], 99.50th=[55837], 99.90th=[99091], 99.95th=[99091], 00:26:59.687 | 99.99th=[99091] 00:26:59.687 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.00, stdev=42.67, samples=19 00:26:59.687 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:26:59.687 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.687 cpu : usr=96.48%, 
sys=2.16%, ctx=229, majf=0, minf=45 00:26:59.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.687 filename0: (groupid=0, jobs=1): err= 0: pid=1045586: Fri Jul 12 12:03:47 2024 00:26:59.687 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10084msec) 00:26:59.687 slat (nsec): min=6127, max=71116, avg=37636.94, stdev=9532.36 00:26:59.687 clat (msec): min=32, max=109, avg=33.27, stdev= 4.65 00:26:59.687 lat (msec): min=32, max=109, avg=33.31, stdev= 4.65 00:26:59.687 clat percentiles (msec): 00:26:59.687 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.687 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.687 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.687 | 99.00th=[ 36], 99.50th=[ 59], 99.90th=[ 109], 99.95th=[ 110], 00:26:59.687 | 99.99th=[ 110] 00:26:59.687 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.687 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.687 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.687 cpu : usr=97.13%, sys=1.96%, ctx=86, majf=0, minf=27 00:26:59.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.687 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.687 filename0: (groupid=0, jobs=1): err= 0: pid=1045587: Fri Jul 12 12:03:47 2024 00:26:59.687 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10013msec) 00:26:59.687 slat (usec): min=4, max=112, avg=30.14, stdev=26.22 00:26:59.687 clat (usec): min=2880, max=37185, avg=32572.98, stdev=3293.95 00:26:59.687 lat (usec): min=2888, max=37205, avg=32603.12, stdev=3294.51 00:26:59.687 clat percentiles (usec): 00:26:59.688 | 1.00th=[10814], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:59.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:59.688 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:26:59.688 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:26:59.688 | 99.99th=[36963] 00:26:59.688 bw ( KiB/s): min= 1916, max= 2432, per=4.25%, avg=1945.40, stdev=114.54, samples=20 00:26:59.688 iops : min= 479, max= 608, avg=486.35, stdev=28.63, samples=20 00:26:59.688 lat (msec) : 4=0.33%, 10=0.66%, 20=0.66%, 50=98.36% 00:26:59.688 cpu : usr=98.07%, sys=1.39%, ctx=72, majf=0, minf=28 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename0: (groupid=0, jobs=1): err= 0: pid=1045588: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.9MiB/10104msec) 00:26:59.688 slat (nsec): 
min=8079, max=73115, avg=21805.68, stdev=13129.87 00:26:59.688 clat (msec): min=6, max=108, avg=33.18, stdev= 4.71 00:26:59.688 lat (msec): min=6, max=108, avg=33.20, stdev= 4.71 00:26:59.688 clat percentiles (msec): 00:26:59.688 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.688 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:59.688 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.688 | 99.00th=[ 35], 99.50th=[ 37], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.688 | 99.99th=[ 109] 00:26:59.688 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1932.80, stdev=57.24, samples=20 00:26:59.688 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:26:59.688 lat (msec) : 10=0.33%, 20=0.33%, 50=99.01%, 250=0.33% 00:26:59.688 cpu : usr=96.60%, sys=2.15%, ctx=268, majf=0, minf=47 00:26:59.688 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename0: (groupid=0, jobs=1): err= 0: pid=1045589: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10083msec) 00:26:59.688 slat (nsec): min=4070, max=63527, avg=29245.31, stdev=11500.28 00:26:59.688 clat (msec): min=32, max=101, avg=33.33, stdev= 4.39 00:26:59.688 lat (msec): min=32, max=101, avg=33.36, stdev= 4.39 00:26:59.688 clat percentiles (msec): 00:26:59.688 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.688 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.688 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.688 | 99.00th=[ 36], 99.50th=[ 67], 99.90th=[ 102], 99.95th=[ 102], 00:26:59.688 | 99.99th=[ 102] 00:26:59.688 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1910.40, stdev=66.37, samples=20 00:26:59.688 iops : min= 416, max= 512, avg=477.60, stdev=16.59, samples=20 00:26:59.688 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.688 cpu : usr=97.85%, sys=1.57%, ctx=92, majf=0, minf=37 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename0: (groupid=0, jobs=1): err= 0: pid=1045590: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10083msec) 00:26:59.688 slat (nsec): min=6019, max=77239, avg=37449.57, stdev=11312.02 00:26:59.688 clat (msec): min=32, max=108, avg=33.29, stdev= 4.62 00:26:59.688 lat (msec): min=32, max=108, avg=33.33, stdev= 4.62 00:26:59.688 clat percentiles (msec): 00:26:59.688 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.688 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.688 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.688 | 99.00th=[ 36], 99.50th=[ 59], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.688 | 99.99th=[ 109] 00:26:59.688 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.688 iops : min= 
448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.688 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.688 cpu : usr=96.30%, sys=2.27%, ctx=253, majf=0, minf=57 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename0: (groupid=0, jobs=1): err= 0: pid=1045591: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:26:59.688 slat (nsec): min=4038, max=67387, avg=30218.85, stdev=11130.63 00:26:59.688 clat (usec): min=14521, max=37295, avg=32841.92, stdev=1515.75 00:26:59.688 lat (usec): min=14536, max=37318, avg=32872.14, stdev=1517.23 00:26:59.688 clat percentiles (usec): 00:26:59.688 | 1.00th=[28181], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.688 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.688 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.688 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487], 00:26:59.688 | 99.99th=[37487] 00:26:59.688 bw ( KiB/s): min= 1792, max= 2052, per=4.23%, avg=1933.68, stdev=59.17, samples=19 00:26:59.688 iops : min= 448, max= 513, avg=483.42, stdev=14.79, samples=19 00:26:59.688 lat (msec) : 20=0.66%, 50=99.34% 00:26:59.688 cpu : usr=96.44%, sys=2.11%, ctx=169, majf=0, minf=43 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename1: (groupid=0, jobs=1): err= 0: pid=1045592: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10006msec) 00:26:59.688 slat (nsec): min=8980, max=62337, avg=30086.28, stdev=10570.93 00:26:59.688 clat (usec): min=14517, max=47246, avg=32860.03, stdev=1540.69 00:26:59.688 lat (usec): min=14534, max=47269, avg=32890.12, stdev=1541.76 00:26:59.688 clat percentiles (usec): 00:26:59.688 | 1.00th=[28443], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.688 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.688 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.688 | 99.00th=[34866], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:26:59.688 | 99.99th=[47449] 00:26:59.688 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1933.47, stdev=58.73, samples=19 00:26:59.688 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:26:59.688 lat (msec) : 20=0.70%, 50=99.30% 00:26:59.688 cpu : usr=97.06%, sys=1.96%, ctx=129, majf=0, minf=55 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 
filename1: (groupid=0, jobs=1): err= 0: pid=1045593: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10013msec) 00:26:59.688 slat (nsec): min=3953, max=67079, avg=28688.00, stdev=11633.04 00:26:59.688 clat (usec): min=3015, max=47614, avg=32586.83, stdev=3345.35 00:26:59.688 lat (usec): min=3026, max=47629, avg=32615.52, stdev=3346.82 00:26:59.688 clat percentiles (usec): 00:26:59.688 | 1.00th=[ 9896], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.688 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.688 | 99.00th=[34866], 99.50th=[35390], 99.90th=[37487], 99.95th=[47449], 00:26:59.688 | 99.99th=[47449] 00:26:59.688 bw ( KiB/s): min= 1916, max= 2432, per=4.25%, avg=1945.40, stdev=114.54, samples=20 00:26:59.688 iops : min= 479, max= 608, avg=486.35, stdev=28.63, samples=20 00:26:59.688 lat (msec) : 4=0.33%, 10=0.70%, 20=0.70%, 50=98.28% 00:26:59.688 cpu : usr=96.05%, sys=2.56%, ctx=214, majf=0, minf=35 00:26:59.688 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename1: (groupid=0, jobs=1): err= 0: pid=1045594: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10084msec) 00:26:59.688 slat (usec): min=7, max=121, avg=50.89, stdev=21.33 00:26:59.688 clat (msec): min=23, max=109, avg=33.15, stdev= 4.69 00:26:59.688 lat (msec): min=23, max=109, avg=33.20, stdev= 4.69 00:26:59.688 clat percentiles (msec): 00:26:59.688 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.688 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.688 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.688 | 99.00th=[ 36], 99.50th=[ 61], 99.90th=[ 109], 99.95th=[ 110], 00:26:59.688 | 99.99th=[ 110] 00:26:59.688 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.688 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.688 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.688 cpu : usr=98.14%, sys=1.27%, ctx=68, majf=0, minf=36 00:26:59.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.688 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.688 filename1: (groupid=0, jobs=1): err= 0: pid=1045595: Fri Jul 12 12:03:47 2024 00:26:59.688 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.7MiB/10034msec) 00:26:59.688 slat (usec): min=8, max=121, avg=35.36, stdev=18.20 00:26:59.688 clat (usec): min=26919, max=99153, avg=33224.55, stdev=4060.21 00:26:59.688 lat (usec): min=26992, max=99179, avg=33259.91, stdev=4060.13 00:26:59.688 clat percentiles (usec): 00:26:59.688 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:59.689 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.689 | 70.00th=[33162], 80.00th=[33162], 
90.00th=[33424], 95.00th=[33817], 00:26:59.689 | 99.00th=[37487], 99.50th=[55313], 99.90th=[99091], 99.95th=[99091], 00:26:59.689 | 99.99th=[99091] 00:26:59.689 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.00, stdev=42.67, samples=19 00:26:59.689 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:26:59.689 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.689 cpu : usr=93.56%, sys=3.83%, ctx=651, majf=0, minf=43 00:26:59.689 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename1: (groupid=0, jobs=1): err= 0: pid=1045596: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10084msec) 00:26:59.689 slat (nsec): min=9793, max=97445, avg=38137.69, stdev=12233.52 00:26:59.689 clat (msec): min=22, max=108, avg=33.29, stdev= 4.64 00:26:59.689 lat (msec): min=22, max=109, avg=33.33, stdev= 4.64 00:26:59.689 clat percentiles (msec): 00:26:59.689 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.689 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.689 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.689 | 99.00th=[ 37], 99.50th=[ 59], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.689 | 99.99th=[ 109] 00:26:59.689 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.689 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.689 cpu : usr=98.33%, sys=1.28%, ctx=17, majf=0, minf=31 00:26:59.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename1: (groupid=0, jobs=1): err= 0: pid=1045597: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.7MiB/10053msec) 00:26:59.689 slat (nsec): min=7734, max=59396, avg=26797.87, stdev=10606.16 00:26:59.689 clat (usec): min=24355, max=99063, avg=33401.22, stdev=4499.29 00:26:59.689 lat (usec): min=24365, max=99083, avg=33428.02, stdev=4498.83 00:26:59.689 clat percentiles (usec): 00:26:59.689 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:26:59.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:59.689 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.689 | 99.00th=[37487], 99.50th=[73925], 99.90th=[99091], 99.95th=[99091], 00:26:59.689 | 99.99th=[99091] 00:26:59.689 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1905.40, stdev=74.18, samples=20 00:26:59.689 iops : min= 416, max= 512, avg=476.35, stdev=18.55, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.689 cpu : usr=98.30%, sys=1.30%, ctx=19, majf=0, minf=29 00:26:59.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename1: (groupid=0, jobs=1): err= 0: pid=1045598: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10083msec) 00:26:59.689 slat (nsec): min=8743, max=86150, avg=35886.41, stdev=12345.62 00:26:59.689 clat (msec): min=22, max=108, avg=33.32, stdev= 4.63 00:26:59.689 lat (msec): min=22, max=108, avg=33.36, stdev= 4.63 00:26:59.689 clat percentiles (msec): 00:26:59.689 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.689 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.689 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.689 | 99.00th=[ 36], 99.50th=[ 59], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.689 | 99.99th=[ 109] 00:26:59.689 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.689 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.689 cpu : usr=98.10%, sys=1.52%, ctx=15, majf=0, minf=45 00:26:59.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename1: (groupid=0, jobs=1): err= 0: pid=1045599: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.7MiB/10091msec) 00:26:59.689 slat (nsec): min=6536, max=88176, avg=37046.51, stdev=12048.96 00:26:59.689 clat (msec): min=27, max=128, avg=33.35, stdev= 5.26 00:26:59.689 lat (msec): min=27, max=128, avg=33.38, stdev= 5.25 00:26:59.689 clat percentiles (msec): 00:26:59.689 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.689 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.689 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.689 | 99.00th=[ 37], 99.50th=[ 70], 99.90th=[ 129], 99.95th=[ 129], 00:26:59.689 | 99.99th=[ 129] 00:26:59.689 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1905.05, stdev=85.67, samples=20 00:26:59.689 iops : min= 416, max= 512, avg=476.25, stdev=21.44, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.689 cpu : usr=98.19%, sys=1.43%, ctx=15, majf=0, minf=34 00:26:59.689 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename2: (groupid=0, jobs=1): err= 0: pid=1045600: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10084msec) 00:26:59.689 slat (nsec): min=14109, max=93638, avg=38152.44, stdev=12846.51 00:26:59.689 clat (msec): min=32, max=108, avg=33.27, stdev= 4.62 00:26:59.689 lat (msec): min=32, max=108, avg=33.31, stdev= 4.62 00:26:59.689 clat percentiles (msec): 00:26:59.689 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 
33], 00:26:59.689 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.689 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.689 | 99.00th=[ 36], 99.50th=[ 59], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.689 | 99.99th=[ 109] 00:26:59.689 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.689 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.689 cpu : usr=98.17%, sys=1.43%, ctx=15, majf=0, minf=25 00:26:59.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename2: (groupid=0, jobs=1): err= 0: pid=1045601: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.7MiB/10053msec) 00:26:59.689 slat (usec): min=7, max=112, avg=32.52, stdev=25.03 00:26:59.689 clat (usec): min=23809, max=98820, avg=33366.62, stdev=4755.56 00:26:59.689 lat (usec): min=23821, max=98838, avg=33399.14, stdev=4754.51 00:26:59.689 clat percentiles (usec): 00:26:59.689 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:59.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:59.689 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:59.689 | 99.00th=[41681], 99.50th=[73925], 99.90th=[99091], 99.95th=[99091], 00:26:59.689 | 99.99th=[99091] 00:26:59.689 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1905.40, stdev=74.36, samples=20 00:26:59.689 iops : min= 416, max= 512, avg=476.35, stdev=18.59, samples=20 00:26:59.689 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.689 cpu : usr=98.29%, sys=1.32%, ctx=8, majf=0, minf=39 00:26:59.689 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename2: (groupid=0, jobs=1): err= 0: pid=1045602: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.2MiB/10064msec) 00:26:59.689 slat (nsec): min=6621, max=73380, avg=30661.23, stdev=14039.86 00:26:59.689 clat (msec): min=14, max=101, avg=32.42, stdev= 5.73 00:26:59.689 lat (msec): min=14, max=101, avg=32.45, stdev= 5.74 00:26:59.689 clat percentiles (msec): 00:26:59.689 | 1.00th=[ 20], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 33], 00:26:59.689 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.689 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:59.689 | 99.00th=[ 46], 99.50th=[ 70], 99.90th=[ 102], 99.95th=[ 102], 00:26:59.689 | 99.99th=[ 102] 00:26:59.689 bw ( KiB/s): min= 1664, max= 2288, per=4.29%, avg=1964.25, stdev=159.94, samples=20 00:26:59.689 iops : min= 416, max= 572, avg=491.05, stdev=40.00, samples=20 00:26:59.689 lat (msec) : 20=1.18%, 50=98.17%, 100=0.32%, 250=0.32% 00:26:59.689 cpu : usr=98.45%, sys=1.16%, ctx=15, majf=0, minf=30 00:26:59.689 IO depths : 1=3.1%, 2=8.0%, 4=20.6%, 8=58.5%, 16=9.9%, 32=0.0%, 
>=64=0.0% 00:26:59.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.689 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.689 filename2: (groupid=0, jobs=1): err= 0: pid=1045603: Fri Jul 12 12:03:47 2024 00:26:59.689 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10007msec) 00:26:59.689 slat (nsec): min=8227, max=76507, avg=27495.29, stdev=10454.78 00:26:59.689 clat (usec): min=18096, max=48161, avg=32892.90, stdev=2972.76 00:26:59.689 lat (usec): min=18107, max=48184, avg=32920.40, stdev=2974.36 00:26:59.689 clat percentiles (usec): 00:26:59.689 | 1.00th=[19006], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:59.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.689 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:59.690 | 99.00th=[46924], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:26:59.690 | 99.99th=[47973] 00:26:59.690 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1933.47, stdev=58.73, samples=19 00:26:59.690 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:26:59.690 lat (msec) : 20=2.21%, 50=97.79% 00:26:59.690 cpu : usr=98.41%, sys=1.19%, ctx=19, majf=0, minf=32 00:26:59.690 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:59.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.690 filename2: (groupid=0, jobs=1): err= 0: pid=1045604: Fri Jul 12 12:03:47 2024 00:26:59.690 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.9MiB/10082msec) 00:26:59.690 slat (nsec): min=5967, max=70587, avg=11607.97, stdev=4095.92 00:26:59.690 clat (usec): min=17031, max=98442, avg=33269.71, stdev=3962.98 00:26:59.690 lat (usec): min=17042, max=98465, avg=33281.32, stdev=3963.33 00:26:59.690 clat percentiles (usec): 00:26:59.690 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:59.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:59.690 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:26:59.690 | 99.00th=[35390], 99.50th=[38011], 99.90th=[98042], 99.95th=[98042], 00:26:59.690 | 99.99th=[98042] 00:26:59.690 bw ( KiB/s): min= 1920, max= 2048, per=4.21%, avg=1926.55, stdev=28.59, samples=20 00:26:59.690 iops : min= 480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:26:59.690 lat (msec) : 20=0.35%, 50=99.32%, 100=0.33% 00:26:59.690 cpu : usr=98.14%, sys=1.49%, ctx=17, majf=0, minf=34 00:26:59.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:59.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.690 filename2: (groupid=0, jobs=1): err= 0: pid=1045605: Fri Jul 12 12:03:47 2024 00:26:59.690 read: IOPS=475, BW=1901KiB/s (1947kB/s)(18.7MiB/10065msec) 00:26:59.690 slat (usec): min=6, max=106, avg=36.67, stdev=11.62 00:26:59.690 clat (msec): min=22, 
max=109, avg=33.32, stdev= 5.19 00:26:59.690 lat (msec): min=22, max=109, avg=33.36, stdev= 5.19 00:26:59.690 clat percentiles (msec): 00:26:59.690 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.690 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.690 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.690 | 99.00th=[ 45], 99.50th=[ 73], 99.90th=[ 109], 99.95th=[ 109], 00:26:59.690 | 99.99th=[ 109] 00:26:59.690 bw ( KiB/s): min= 1664, max= 2043, per=4.17%, avg=1906.95, stdev=70.40, samples=20 00:26:59.690 iops : min= 416, max= 510, avg=476.70, stdev=17.52, samples=20 00:26:59.690 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.690 cpu : usr=96.46%, sys=2.04%, ctx=136, majf=0, minf=53 00:26:59.690 IO depths : 1=5.6%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:59.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.690 filename2: (groupid=0, jobs=1): err= 0: pid=1045606: Fri Jul 12 12:03:47 2024 00:26:59.690 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.8MiB/10084msec) 00:26:59.690 slat (usec): min=12, max=126, avg=43.55, stdev=18.48 00:26:59.690 clat (msec): min=30, max=109, avg=33.21, stdev= 4.67 00:26:59.690 lat (msec): min=30, max=109, avg=33.25, stdev= 4.67 00:26:59.690 clat percentiles (msec): 00:26:59.690 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:59.690 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:59.690 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:26:59.690 | 99.00th=[ 36], 99.50th=[ 60], 99.90th=[ 109], 99.95th=[ 110], 00:26:59.690 | 99.99th=[ 110] 00:26:59.690 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:26:59.690 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:59.690 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:26:59.690 cpu : usr=97.71%, sys=1.65%, ctx=36, majf=0, minf=38 00:26:59.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.690 filename2: (groupid=0, jobs=1): err= 0: pid=1045607: Fri Jul 12 12:03:47 2024 00:26:59.690 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.7MiB/10038msec) 00:26:59.690 slat (usec): min=8, max=113, avg=41.69, stdev=23.92 00:26:59.690 clat (usec): min=31374, max=99200, avg=33197.42, stdev=4139.13 00:26:59.690 lat (usec): min=31455, max=99221, avg=33239.11, stdev=4137.14 00:26:59.690 clat percentiles (usec): 00:26:59.690 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:59.690 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:59.690 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:26:59.690 | 99.00th=[36439], 99.50th=[59507], 99.90th=[99091], 99.95th=[99091], 00:26:59.690 | 99.99th=[99091] 00:26:59.690 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1907.20, stdev=82.01, samples=20 00:26:59.690 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 
00:26:59.690 lat (msec) : 50=99.33%, 100=0.67% 00:26:59.690 cpu : usr=96.59%, sys=2.17%, ctx=127, majf=0, minf=41 00:26:59.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:59.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.690 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:59.690 00:26:59.690 Run status group 0 (all jobs): 00:26:59.690 READ: bw=44.7MiB/s (46.8MB/s), 1895KiB/s-1959KiB/s (1940kB/s-2006kB/s), io=451MiB (473MB), run=10005-10104msec 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
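[Note on the teardown starting here] After the 24-thread run completes, destroy_subsystems 0 1 2 unwinds the setup in the obvious order: delete each NVMe-oF subsystem, then delete its backing null bdev. A minimal stand-alone sketch of the same two RPCs per subsystem, under the same assumptions (scripts/rpc.py and the $SPDK path from the earlier sketch):

  # Hypothetical stand-alone equivalent of the destroy_subsystems 0 1 2 calls traced below.
  for i in 0 1 2; do
      "$SPDK"/scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      "$SPDK"/scripts/rpc.py bdev_null_delete "bdev_null$i"
  done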
00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 bdev_null0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.690 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 [2024-07-12 12:03:47.957069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 bdev_null1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.691 { 00:26:59.691 "params": { 00:26:59.691 "name": "Nvme$subsystem", 00:26:59.691 "trtype": "$TEST_TRANSPORT", 00:26:59.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.691 "adrfam": "ipv4", 00:26:59.691 "trsvcid": "$NVMF_PORT", 00:26:59.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.691 "hdgst": ${hdgst:-false}, 00:26:59.691 "ddgst": ${ddgst:-false} 00:26:59.691 }, 00:26:59.691 "method": "bdev_nvme_attach_controller" 00:26:59.691 } 00:26:59.691 EOF 00:26:59.691 )") 00:26:59.691 12:03:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.691 { 00:26:59.691 "params": { 00:26:59.691 "name": "Nvme$subsystem", 00:26:59.691 "trtype": "$TEST_TRANSPORT", 00:26:59.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.691 "adrfam": "ipv4", 00:26:59.691 "trsvcid": "$NVMF_PORT", 00:26:59.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.691 "hdgst": ${hdgst:-false}, 00:26:59.691 "ddgst": ${ddgst:-false} 00:26:59.691 }, 00:26:59.691 "method": "bdev_nvme_attach_controller" 00:26:59.691 } 00:26:59.691 EOF 00:26:59.691 )") 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:59.691 12:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
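For reference, the subsystem setup traced above reduces to the following RPC sequence against the running nvmf_tgt (a condensed sketch using scripts/rpc.py; the trace issues the same calls through rpc_cmd, and the second subsystem repeats the steps with a "1" suffix):

  # DIF-capable null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420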
00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.691 "params": { 00:26:59.691 "name": "Nvme0", 00:26:59.691 "trtype": "tcp", 00:26:59.691 "traddr": "10.0.0.2", 00:26:59.691 "adrfam": "ipv4", 00:26:59.691 "trsvcid": "4420", 00:26:59.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.691 "hdgst": false, 00:26:59.691 "ddgst": false 00:26:59.691 }, 00:26:59.691 "method": "bdev_nvme_attach_controller" 00:26:59.691 },{ 00:26:59.691 "params": { 00:26:59.691 "name": "Nvme1", 00:26:59.691 "trtype": "tcp", 00:26:59.691 "traddr": "10.0.0.2", 00:26:59.691 "adrfam": "ipv4", 00:26:59.691 "trsvcid": "4420", 00:26:59.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.691 "hdgst": false, 00:26:59.691 "ddgst": false 00:26:59.691 }, 00:26:59.691 "method": "bdev_nvme_attach_controller" 00:26:59.691 }' 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:59.691 12:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.691 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:59.691 ... 00:26:59.691 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:59.691 ... 
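The JSON printed above is handed to the fio bdev plugin over /dev/fd/62 while gen_fio_conf writes the job file to /dev/fd/61. Outside the harness the equivalent invocation is roughly as follows; this is a sketch only — the file names stand in for the /dev/fd descriptors, and the job stanza is inferred from the parameters set at target/dif.sh@115, since the generated job file itself is not echoed in the trace:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme.json ./dif.fio

  # dif.fio (inferred): two job sections, numjobs=2 each, hence "Starting 4 threads" below
  [global]
  ioengine=spdk_bdev
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1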
00:26:59.691 fio-3.35 00:26:59.691 Starting 4 threads 00:26:59.691 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.952 00:27:04.952 filename0: (groupid=0, jobs=1): err= 0: pid=1046991: Fri Jul 12 12:03:54 2024 00:27:04.952 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5003msec) 00:27:04.952 slat (nsec): min=3943, max=31908, avg=12009.48, stdev=3751.41 00:27:04.952 clat (usec): min=739, max=10400, avg=4063.25, stdev=340.67 00:27:04.952 lat (usec): min=753, max=10427, avg=4075.26, stdev=340.65 00:27:04.952 clat percentiles (usec): 00:27:04.952 | 1.00th=[ 3228], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 3982], 00:27:04.952 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:27:04.952 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4359], 00:27:04.952 | 99.00th=[ 5014], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[10290], 00:27:04.952 | 99.99th=[10421] 00:27:04.952 bw ( KiB/s): min=15360, max=15888, per=25.11%, avg=15598.40, stdev=197.47, samples=10 00:27:04.952 iops : min= 1920, max= 1986, avg=1949.80, stdev=24.68, samples=10 00:27:04.952 lat (usec) : 750=0.01% 00:27:04.952 lat (msec) : 2=0.08%, 4=23.91%, 10=75.91%, 20=0.08% 00:27:04.952 cpu : usr=93.86%, sys=5.68%, ctx=9, majf=0, minf=100 00:27:04.952 IO depths : 1=0.4%, 2=11.5%, 4=59.7%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 issued rwts: total=9757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:04.952 filename0: (groupid=0, jobs=1): err= 0: pid=1046992: Fri Jul 12 12:03:54 2024 00:27:04.952 read: IOPS=1941, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5001msec) 00:27:04.952 slat (nsec): min=3950, max=31623, avg=14617.03, stdev=3143.27 00:27:04.952 clat (usec): min=785, max=8452, avg=4062.28, stdev=529.12 00:27:04.952 lat (usec): min=798, max=8464, avg=4076.90, stdev=529.15 00:27:04.952 clat percentiles (usec): 00:27:04.952 | 1.00th=[ 1909], 5.00th=[ 3621], 10.00th=[ 3884], 20.00th=[ 3982], 00:27:04.952 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:27:04.952 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4178], 95.00th=[ 4359], 00:27:04.952 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 8291], 00:27:04.952 | 99.99th=[ 8455] 00:27:04.952 bw ( KiB/s): min=15312, max=15760, per=25.01%, avg=15537.78, stdev=173.15, samples=9 00:27:04.952 iops : min= 1914, max= 1970, avg=1942.22, stdev=21.64, samples=9 00:27:04.952 lat (usec) : 1000=0.14% 00:27:04.952 lat (msec) : 2=0.96%, 4=26.49%, 10=72.40% 00:27:04.952 cpu : usr=94.18%, sys=5.32%, ctx=9, majf=0, minf=74 00:27:04.952 IO depths : 1=1.2%, 2=22.4%, 4=51.8%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 issued rwts: total=9708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:04.952 filename1: (groupid=0, jobs=1): err= 0: pid=1046993: Fri Jul 12 12:03:54 2024 00:27:04.952 read: IOPS=1929, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5002msec) 00:27:04.952 slat (nsec): min=3963, max=55799, avg=16684.29, stdev=5844.58 00:27:04.952 clat (usec): min=788, max=12928, avg=4075.60, stdev=492.37 00:27:04.952 lat (usec): min=803, max=12940, avg=4092.28, stdev=492.12 
00:27:04.952 clat percentiles (usec): 00:27:04.952 | 1.00th=[ 2606], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 3949], 00:27:04.952 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:27:04.952 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4178], 95.00th=[ 4555], 00:27:04.952 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[10552], 00:27:04.952 | 99.99th=[12911] 00:27:04.952 bw ( KiB/s): min=15136, max=15792, per=24.85%, avg=15433.40, stdev=187.83, samples=10 00:27:04.952 iops : min= 1892, max= 1974, avg=1929.10, stdev=23.53, samples=10 00:27:04.952 lat (usec) : 1000=0.08% 00:27:04.952 lat (msec) : 2=0.55%, 4=31.81%, 10=67.51%, 20=0.05% 00:27:04.952 cpu : usr=91.12%, sys=6.76%, ctx=169, majf=0, minf=57 00:27:04.952 IO depths : 1=1.8%, 2=22.4%, 4=51.8%, 8=24.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 issued rwts: total=9651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:04.952 filename1: (groupid=0, jobs=1): err= 0: pid=1046994: Fri Jul 12 12:03:54 2024 00:27:04.952 read: IOPS=1945, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5001msec) 00:27:04.952 slat (nsec): min=3893, max=36540, avg=15171.52, stdev=3729.69 00:27:04.952 clat (usec): min=870, max=8598, avg=4052.46, stdev=468.83 00:27:04.952 lat (usec): min=892, max=8611, avg=4067.63, stdev=468.81 00:27:04.952 clat percentiles (usec): 00:27:04.952 | 1.00th=[ 2147], 5.00th=[ 3621], 10.00th=[ 3884], 20.00th=[ 3982], 00:27:04.952 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:27:04.952 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4178], 95.00th=[ 4293], 00:27:04.952 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7635], 00:27:04.952 | 99.99th=[ 8586] 00:27:04.952 bw ( KiB/s): min=15104, max=15968, per=25.05%, avg=15559.90, stdev=230.93, samples=10 00:27:04.952 iops : min= 1888, max= 1996, avg=1944.90, stdev=28.88, samples=10 00:27:04.952 lat (usec) : 1000=0.12% 00:27:04.952 lat (msec) : 2=0.77%, 4=27.23%, 10=71.88% 00:27:04.952 cpu : usr=94.52%, sys=4.92%, ctx=7, majf=0, minf=84 00:27:04.952 IO depths : 1=0.5%, 2=22.7%, 4=51.6%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.952 issued rwts: total=9730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.953 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:04.953 00:27:04.953 Run status group 0 (all jobs): 00:27:04.953 READ: bw=60.7MiB/s (63.6MB/s), 15.1MiB/s-15.2MiB/s (15.8MB/s-16.0MB/s), io=303MiB (318MB), run=5001-5003msec 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 
00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 00:27:04.953 real 0m24.185s 00:27:04.953 user 4m32.355s 00:27:04.953 sys 0m7.294s 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 ************************************ 00:27:04.953 END TEST fio_dif_rand_params 00:27:04.953 ************************************ 00:27:04.953 12:03:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:04.953 12:03:54 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:04.953 12:03:54 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 ************************************ 00:27:04.953 START TEST fio_dif_digest 00:27:04.953 ************************************ 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 
00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 bdev_null0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:04.953 [2024-07-12 12:03:54.333575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.953 { 00:27:04.953 "params": { 00:27:04.953 "name": "Nvme$subsystem", 00:27:04.953 "trtype": "$TEST_TRANSPORT", 00:27:04.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.953 "adrfam": "ipv4", 00:27:04.953 "trsvcid": "$NVMF_PORT", 00:27:04.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.953 "hdgst": ${hdgst:-false}, 00:27:04.953 "ddgst": ${ddgst:-false} 00:27:04.953 }, 00:27:04.953 
"method": "bdev_nvme_attach_controller" 00:27:04.953 } 00:27:04.953 EOF 00:27:04.953 )") 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.953 "params": { 00:27:04.953 "name": "Nvme0", 00:27:04.953 "trtype": "tcp", 00:27:04.953 "traddr": "10.0.0.2", 00:27:04.953 "adrfam": "ipv4", 00:27:04.953 "trsvcid": "4420", 00:27:04.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.953 "hdgst": true, 00:27:04.953 "ddgst": true 00:27:04.953 }, 00:27:04.953 "method": "bdev_nvme_attach_controller" 00:27:04.953 }' 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:04.953 12:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.212 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:05.212 ... 
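Relative to the random-params run, the digest test changes only two things, both visible in the trace: the null bdev is created with protection type 3, and the host-side attach config enables the NVMe/TCP header and data digests. Condensed (a sketch, using the same rpc.py client as above):

  # target side: DIF type 3 null bdev behind nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # host side: the generated bdev_nvme_attach_controller params flip the digest flags
  #   "hdgst": true,
  #   "ddgst": true
  # fio then runs 3 jobs of 128 KiB random reads at iodepth 3 for 10 seconds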
00:27:05.212 fio-3.35 00:27:05.212 Starting 3 threads 00:27:05.212 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.406 00:27:17.406 filename0: (groupid=0, jobs=1): err= 0: pid=1047744: Fri Jul 12 12:04:05 2024 00:27:17.406 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10048msec) 00:27:17.406 slat (nsec): min=6135, max=54439, avg=13920.49, stdev=3686.93 00:27:17.406 clat (usec): min=11977, max=55895, avg=16217.86, stdev=1670.79 00:27:17.406 lat (usec): min=11991, max=55907, avg=16231.78, stdev=1670.88 00:27:17.406 clat percentiles (usec): 00:27:17.406 | 1.00th=[13698], 5.00th=[14353], 10.00th=[14746], 20.00th=[15270], 00:27:17.406 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:27:17.406 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:27:17.406 | 99.00th=[19006], 99.50th=[19792], 99.90th=[51643], 99.95th=[55837], 00:27:17.406 | 99.99th=[55837] 00:27:17.406 bw ( KiB/s): min=23040, max=25088, per=31.83%, avg=23692.80, stdev=547.64, samples=20 00:27:17.406 iops : min= 180, max= 196, avg=185.10, stdev= 4.28, samples=20 00:27:17.406 lat (msec) : 20=99.78%, 50=0.11%, 100=0.11% 00:27:17.406 cpu : usr=92.30%, sys=7.20%, ctx=21, majf=0, minf=128 00:27:17.406 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 issued rwts: total=1854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.407 filename0: (groupid=0, jobs=1): err= 0: pid=1047745: Fri Jul 12 12:04:05 2024 00:27:17.407 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10049msec) 00:27:17.407 slat (nsec): min=5862, max=42470, avg=13701.68, stdev=3389.94 00:27:17.407 clat (usec): min=11177, max=52887, avg=14841.99, stdev=1588.96 00:27:17.407 lat (usec): min=11189, max=52900, avg=14855.69, stdev=1588.93 00:27:17.407 clat percentiles (usec): 00:27:17.407 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:27:17.407 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:27:17.407 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:27:17.407 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21627], 99.95th=[49021], 00:27:17.407 | 99.99th=[52691] 00:27:17.407 bw ( KiB/s): min=24320, max=27136, per=34.79%, avg=25894.40, stdev=644.83, samples=20 00:27:17.407 iops : min= 190, max= 212, avg=202.30, stdev= 5.04, samples=20 00:27:17.407 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:27:17.407 cpu : usr=91.25%, sys=8.29%, ctx=25, majf=0, minf=132 00:27:17.407 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.407 filename0: (groupid=0, jobs=1): err= 0: pid=1047746: Fri Jul 12 12:04:05 2024 00:27:17.407 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10049msec) 00:27:17.407 slat (nsec): min=5711, max=38812, avg=13495.39, stdev=3225.12 00:27:17.407 clat (usec): min=11097, max=52237, avg=15319.42, stdev=1551.09 00:27:17.407 lat (usec): min=11116, max=52251, avg=15332.92, stdev=1551.11 00:27:17.407 clat percentiles (usec): 00:27:17.407 | 
1.00th=[12780], 5.00th=[13566], 10.00th=[14091], 20.00th=[14484], 00:27:17.407 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:27:17.407 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[16909], 00:27:17.407 | 99.00th=[17957], 99.50th=[18482], 99.90th=[49021], 99.95th=[52167], 00:27:17.407 | 99.99th=[52167] 00:27:17.407 bw ( KiB/s): min=24368, max=26880, per=33.71%, avg=25090.40, stdev=612.91, samples=20 00:27:17.407 iops : min= 190, max= 210, avg=196.00, stdev= 4.81, samples=20 00:27:17.407 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:27:17.407 cpu : usr=91.75%, sys=7.74%, ctx=25, majf=0, minf=117 00:27:17.407 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.407 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.407 00:27:17.407 Run status group 0 (all jobs): 00:27:17.407 READ: bw=72.7MiB/s (76.2MB/s), 23.1MiB/s-25.2MiB/s (24.2MB/s-26.4MB/s), io=730MiB (766MB), run=10048-10049msec 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.407 00:27:17.407 real 0m11.184s 00:27:17.407 user 0m28.746s 00:27:17.407 sys 0m2.625s 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:17.407 12:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:17.407 ************************************ 00:27:17.407 END TEST fio_dif_digest 00:27:17.407 ************************************ 00:27:17.407 12:04:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:17.407 12:04:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.407 rmmod nvme_tcp 00:27:17.407 rmmod nvme_fabrics 00:27:17.407 
rmmod nvme_keyring 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1041180 ']' 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1041180 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1041180 ']' 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1041180 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1041180 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1041180' 00:27:17.407 killing process with pid 1041180 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1041180 00:27:17.407 12:04:05 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1041180 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:17.407 12:04:05 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:17.407 Waiting for block devices as requested 00:27:17.673 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:17.673 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:17.673 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.997 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:17.997 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:17.997 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:17.997 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:17.997 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:18.257 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:18.257 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:18.257 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:18.257 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:18.516 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:18.516 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:18.516 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:18.775 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:18.775 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:18.775 12:04:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.775 12:04:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.775 12:04:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.775 12:04:08 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.775 12:04:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.775 12:04:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:18.775 12:04:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.311 12:04:10 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.311 00:27:21.311 real 1m6.766s 00:27:21.311 user 6m28.597s 00:27:21.311 sys 0m19.355s 00:27:21.311 12:04:10 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:21.311 12:04:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:27:21.311 ************************************ 00:27:21.311 END TEST nvmf_dif 00:27:21.311 ************************************ 00:27:21.311 12:04:10 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:21.311 12:04:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:21.311 12:04:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:21.311 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:27:21.311 ************************************ 00:27:21.311 START TEST nvmf_abort_qd_sizes 00:27:21.311 ************************************ 00:27:21.311 12:04:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:21.311 * Looking for test storage... 00:27:21.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:21.311 12:04:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.311 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:21.311 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.312 12:04:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.312 12:04:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.214 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:23.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:23.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:23.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:23.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
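With two e810 ports discovered, nvmf_tcp_init (traced below) wires them into a loopback topology: one port is moved into a target namespace, the other stays in the default namespace as the initiator. Stripped of the xtrace noise, the steps condense to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                              # initiator -> target sanity check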
00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:27:23.215 00:27:23.215 --- 10.0.0.2 ping statistics --- 00:27:23.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.215 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:27:23.215 00:27:23.215 --- 10.0.0.1 ping statistics --- 00:27:23.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.215 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:23.215 12:04:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.147 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:24.147 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:24.147 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:25.084 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1052526 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1052526 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1052526 ']' 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:25.343 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:25.343 [2024-07-12 12:04:14.669938] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:27:25.343 [2024-07-12 12:04:14.670016] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.343 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.343 [2024-07-12 12:04:14.734624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.601 [2024-07-12 12:04:14.852542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.601 [2024-07-12 12:04:14.852592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.601 [2024-07-12 12:04:14.852605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.601 [2024-07-12 12:04:14.852616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.601 [2024-07-12 12:04:14.852625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.601 [2024-07-12 12:04:14.852804] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.601 [2024-07-12 12:04:14.853886] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.602 [2024-07-12 12:04:14.853958] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.602 [2024-07-12 12:04:14.853962] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.602 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:25.602 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:27:25.602 12:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.602 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:25.602 12:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:25.602 12:04:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:25.602 ************************************ 00:27:25.602 START TEST spdk_target_abort 00:27:25.602 ************************************ 00:27:25.602 12:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:27:25.602 12:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:25.602 12:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:25.602 12:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.602 12:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.914 spdk_targetn1 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.914 [2024-07-12 12:04:17.892007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.914 [2024-07-12 12:04:17.924224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:28.914 12:04:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:28.914 EAL: No free 2048 kB hugepages reported on node 1 
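[Editor's note] The rabort helper traced above assembles the -r transport string field by field and then sweeps the queue depths from qds=(4 24 64). A condensed sketch of that sweep, built only from values visible in the trace:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # 50% read / 50% write, 4 KiB I/Os, abort example from the SPDK build tree
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done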
00:27:32.197 Initializing NVMe Controllers 00:27:32.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:32.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:32.197 Initialization complete. Launching workers. 00:27:32.197 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12438, failed: 0 00:27:32.197 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 11219 00:27:32.197 success 683, unsuccess 536, failed 0 00:27:32.197 12:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:32.197 12:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.197 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.476 Initializing NVMe Controllers 00:27:35.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:35.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:35.476 Initialization complete. Launching workers. 00:27:35.476 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8481, failed: 0 00:27:35.476 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1249, failed to submit 7232 00:27:35.476 success 352, unsuccess 897, failed 0 00:27:35.476 12:04:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:35.476 12:04:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:35.476 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.764 Initializing NVMe Controllers 00:27:38.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:38.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:38.764 Initialization complete. Launching workers. 
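[Editor's note] The per-run counters above are internally consistent: for the qd=4 run, 683 successful + 536 unsuccessful aborts = 1219 aborts submitted, and 1219 + 11219 failed-to-submit = 12438 I/Os completed; for the qd=24 run, 352 + 897 = 1249 and 1249 + 7232 = 8481. An "unsuccess" here is an abort that completed without actually aborting its I/O, typically because the target had already finished it; "failed 0" means no abort command itself errored out.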
00:27:38.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31122, failed: 0 00:27:38.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2625, failed to submit 28497 00:27:38.764 success 520, unsuccess 2105, failed 0 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.764 12:04:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1052526 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1052526 ']' 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1052526 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1052526 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1052526' 00:27:40.138 killing process with pid 1052526 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1052526 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1052526 00:27:40.138 00:27:40.138 real 0m14.467s 00:27:40.138 user 0m54.946s 00:27:40.138 sys 0m2.483s 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 ************************************ 00:27:40.138 END TEST spdk_target_abort 00:27:40.138 ************************************ 00:27:40.138 12:04:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:40.138 12:04:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:40.138 12:04:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:40.138 12:04:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 ************************************ 00:27:40.138 START TEST kernel_target_abort 00:27:40.138 
************************************ 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:40.138 12:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:41.554 Waiting for block devices as requested 00:27:41.554 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:41.554 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:41.554 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:41.554 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:41.815 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:41.815 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:41.815 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:41.815 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:42.080 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:42.080 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:42.080 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:42.080 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:42.339 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:42.339 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:42.339 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:42.339 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:42.598 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:42.598 12:04:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:42.598 No valid GPT data, bailing 00:27:42.598 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:42.598 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:42.598 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:42.598 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:42.598 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.599 12:04:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:42.599 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:42.858 00:27:42.858 Discovery Log Number of Records 2, Generation counter 2 00:27:42.858 =====Discovery Log Entry 0====== 00:27:42.858 trtype: tcp 00:27:42.858 adrfam: ipv4 00:27:42.858 subtype: current discovery subsystem 00:27:42.858 treq: not specified, sq flow control disable supported 00:27:42.858 portid: 1 00:27:42.858 trsvcid: 4420 00:27:42.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:42.858 traddr: 10.0.0.1 00:27:42.858 eflags: none 00:27:42.858 sectype: none 00:27:42.858 =====Discovery Log Entry 1====== 00:27:42.858 trtype: tcp 00:27:42.858 adrfam: ipv4 00:27:42.858 subtype: nvme subsystem 00:27:42.858 treq: not specified, sq flow control disable supported 00:27:42.858 portid: 1 00:27:42.858 trsvcid: 4420 00:27:42.858 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:42.858 traddr: 10.0.0.1 00:27:42.858 eflags: none 00:27:42.858 sectype: none 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:42.858 12:04:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:42.858 12:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:42.858 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.147 Initializing NVMe Controllers 00:27:46.147 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:46.147 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:46.147 Initialization complete. Launching workers. 00:27:46.147 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44883, failed: 0 00:27:46.147 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44883, failed to submit 0 00:27:46.147 success 0, unsuccess 44883, failed 0 00:27:46.147 12:04:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:46.147 12:04:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:46.147 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.441 Initializing NVMe Controllers 00:27:49.441 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:49.441 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:49.441 Initialization complete. Launching workers. 
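[Editor's note] The kernel-target half of the test drives everything through nvmet configfs; the mkdir/echo/ln trace above maps onto the sketch below. xtrace does not print redirection targets, so the attribute names used here (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are an assumption; the serial/model echo ("SPDK-nqn...") is omitted. clean_kernel_target later reverses all of this (rm -f the port link, rmdir the namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet), as seen further down.

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"        # assumed target of the bare 'echo 1'
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # the block device that passed the GPT check
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the port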
00:27:49.441 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83402, failed: 0 00:27:49.441 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21034, failed to submit 62368 00:27:49.441 success 0, unsuccess 21034, failed 0 00:27:49.441 12:04:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:49.441 12:04:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.441 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.727 Initializing NVMe Controllers 00:27:52.727 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:52.727 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:52.727 Initialization complete. Launching workers. 00:27:52.727 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78940, failed: 0 00:27:52.727 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19706, failed to submit 59234 00:27:52.727 success 0, unsuccess 19706, failed 0 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:52.727 12:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.293 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:53.293 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:53.293 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:53.293 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:53.293 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:53.293 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:53.294 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:53.294 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:53.294 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:53.553 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:53.553 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:54.495 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:54.495 00:27:54.495 real 0m14.308s 00:27:54.495 user 0m6.484s 00:27:54.495 sys 0m3.144s 00:27:54.495 12:04:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:54.495 12:04:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.495 ************************************ 00:27:54.495 END TEST kernel_target_abort 00:27:54.495 ************************************ 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.495 rmmod nvme_tcp 00:27:54.495 rmmod nvme_fabrics 00:27:54.495 rmmod nvme_keyring 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1052526 ']' 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1052526 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1052526 ']' 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1052526 00:27:54.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1052526) - No such process 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1052526 is not found' 00:27:54.495 Process with pid 1052526 is not found 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:54.495 12:04:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:55.428 Waiting for block devices as requested 00:27:55.687 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:55.687 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:55.948 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:55.948 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:55.948 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:55.948 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:56.206 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:56.206 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:56.206 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:56.206 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:56.465 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:56.465 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:56.465 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:56.465 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:56.724 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:56.724 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:27:56.724 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:56.982 12:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.886 12:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:58.886 00:27:58.886 real 0m38.019s 00:27:58.886 user 1m3.428s 00:27:58.886 sys 0m8.883s 00:27:58.886 12:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:58.886 12:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:58.886 ************************************ 00:27:58.886 END TEST nvmf_abort_qd_sizes 00:27:58.886 ************************************ 00:27:58.886 12:04:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:58.886 12:04:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:58.886 12:04:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:58.886 12:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.886 ************************************ 00:27:58.886 START TEST keyring_file 00:27:58.886 ************************************ 00:27:58.886 12:04:48 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:59.145 * Looking for test storage... 
00:27:59.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:59.145 12:04:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:59.145 12:04:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.145 12:04:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.145 12:04:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.145 12:04:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.145 12:04:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.145 12:04:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.145 12:04:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.146 12:04:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.146 12:04:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:59.146 12:04:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tSYY1zeHWF 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:59.146 12:04:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tSYY1zeHWF 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tSYY1zeHWF 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tSYY1zeHWF 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BGXBp5TcDj 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:59.146 12:04:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BGXBp5TcDj 00:27:59.146 12:04:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BGXBp5TcDj 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BGXBp5TcDj 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=1058299 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:59.146 12:04:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1058299 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1058299 ']' 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:59.146 12:04:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.146 [2024-07-12 12:04:48.563536] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
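[Editor's note] prep_key, traced above, builds each TLS PSK as a file: mktemp a path, write the interchange-format string produced by format_interchange_psk (an inline python helper in nvmf/common.sh), and restrict it to 0600; the files are later registered against the bperf RPC socket with keyring_file_add_key. A rough sketch, with the encoded payload left as a placeholder because the exact encoding is done by the python helper and is not reproduced here:

path=$(mktemp)                               # e.g. /tmp/tmp.tSYY1zeHWF for key0 in this run
printf 'NVMeTLSkey-1:<digest>:<base64 payload>:\n' > "$path"   # placeholder, not the real encoder output
chmod 0600 "$path"
# Registered later in this log against the bperf socket, roughly:
#   rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"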
00:27:59.146 [2024-07-12 12:04:48.563635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058299 ] 00:27:59.146 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.146 [2024-07-12 12:04:48.623740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.406 [2024-07-12 12:04:48.744733] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:27:59.665 12:04:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.665 [2024-07-12 12:04:49.030467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.665 null0 00:27:59.665 [2024-07-12 12:04:49.062504] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:59.665 [2024-07-12 12:04:49.062990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.665 [2024-07-12 12:04:49.070526] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.665 12:04:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.665 [2024-07-12 12:04:49.082553] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:59.665 request: 00:27:59.665 { 00:27:59.665 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.665 "secure_channel": false, 00:27:59.665 "listen_address": { 00:27:59.665 "trtype": "tcp", 00:27:59.665 "traddr": "127.0.0.1", 00:27:59.665 "trsvcid": "4420" 00:27:59.665 }, 00:27:59.665 "method": "nvmf_subsystem_add_listener", 00:27:59.665 "req_id": 1 00:27:59.665 } 00:27:59.665 Got JSON-RPC error response 00:27:59.665 response: 00:27:59.665 { 00:27:59.665 "code": -32602, 00:27:59.665 "message": "Invalid parameters" 00:27:59.665 } 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:27:59.665 12:04:49 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:59.665 12:04:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=1058303 00:27:59.665 12:04:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1058303 /var/tmp/bperf.sock 00:27:59.665 12:04:49 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1058303 ']' 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:59.665 12:04:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.665 [2024-07-12 12:04:49.131796] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 00:27:59.665 [2024-07-12 12:04:49.131897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058303 ] 00:27:59.924 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.924 [2024-07-12 12:04:49.194965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.924 [2024-07-12 12:04:49.309983] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.184 12:04:49 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:00.184 12:04:49 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:28:00.184 12:04:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:00.184 12:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:00.442 12:04:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BGXBp5TcDj 00:28:00.442 12:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BGXBp5TcDj 00:28:00.442 12:04:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:00.442 12:04:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:00.442 12:04:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.442 12:04:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.442 12:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.699 12:04:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tSYY1zeHWF == \/\t\m\p\/\t\m\p\.\t\S\Y\Y\1\z\e\H\W\F ]] 00:28:00.699 12:04:50 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:00.699 12:04:50 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:00.699 12:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.699 12:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:00.699 12:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.957 12:04:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BGXBp5TcDj == \/\t\m\p\/\t\m\p\.\B\G\X\B\p\5\T\c\D\j ]] 00:28:00.957 12:04:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:00.957 12:04:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:00.957 12:04:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:00.957 12:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.957 12:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.957 12:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:01.215 12:04:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:01.215 12:04:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:01.215 12:04:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:01.215 12:04:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:01.215 12:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:01.215 12:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:01.215 12:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.473 12:04:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:01.473 12:04:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.473 12:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.730 [2024-07-12 12:04:51.176096] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:02.015 nvme0n1 00:28:02.015 12:04:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:02.015 12:04:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:02.015 12:04:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:02.015 12:04:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:02.015 12:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.015 12:04:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:02.273 12:04:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:02.273 12:04:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:02.273 12:04:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:02.273 12:04:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:02.273 12:04:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:02.273 
12:04:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:02.273 12:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.533 12:04:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:02.533 12:04:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.533 Running I/O for 1 seconds... 00:28:03.469 00:28:03.469 Latency(us) 00:28:03.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.469 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:03.469 nvme0n1 : 1.01 8337.64 32.57 0.00 0.00 15266.12 4563.25 21942.42 00:28:03.469 =================================================================================================================== 00:28:03.469 Total : 8337.64 32.57 0.00 0.00 15266.12 4563.25 21942.42 00:28:03.469 0 00:28:03.469 12:04:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:03.469 12:04:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:03.725 12:04:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:03.725 12:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:03.725 12:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:03.725 12:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:03.725 12:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:03.725 12:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:03.982 12:04:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:03.982 12:04:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:03.982 12:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:03.982 12:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:03.982 12:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:03.982 12:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:03.982 12:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.240 12:04:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:04.240 12:04:53 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:04.240 12:04:53 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.240 12:04:53 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.240 12:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.497 [2024-07-12 12:04:53.871957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:04.497 [2024-07-12 12:04:53.872748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173abf0 (107): Transport endpoint is not connected 00:28:04.497 [2024-07-12 12:04:53.873737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173abf0 (9): Bad file descriptor 00:28:04.497 [2024-07-12 12:04:53.874735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:04.497 [2024-07-12 12:04:53.874760] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:04.497 [2024-07-12 12:04:53.874776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:04.497 request: 00:28:04.497 { 00:28:04.497 "name": "nvme0", 00:28:04.497 "trtype": "tcp", 00:28:04.497 "traddr": "127.0.0.1", 00:28:04.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.497 "adrfam": "ipv4", 00:28:04.497 "trsvcid": "4420", 00:28:04.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.497 "psk": "key1", 00:28:04.497 "method": "bdev_nvme_attach_controller", 00:28:04.497 "req_id": 1 00:28:04.497 } 00:28:04.497 Got JSON-RPC error response 00:28:04.497 response: 00:28:04.497 { 00:28:04.497 "code": -5, 00:28:04.497 "message": "Input/output error" 00:28:04.497 } 00:28:04.497 12:04:53 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:04.497 12:04:53 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:04.497 12:04:53 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:04.497 12:04:53 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:04.497 12:04:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:04.497 12:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:04.497 12:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.498 12:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.498 12:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.498 12:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:04.755 12:04:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:04.755 12:04:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:04.755 12:04:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:04.755 12:04:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.755 12:04:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.755 12:04:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.755 12:04:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:05.013 12:04:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:05.013 12:04:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:05.013 12:04:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:05.270 12:04:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:05.270 12:04:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:05.528 12:04:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:05.528 12:04:54 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:05.528 12:04:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.786 12:04:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:05.786 12:04:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tSYY1zeHWF 00:28:05.786 12:04:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.786 12:04:55 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:05.786 12:04:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:06.043 [2024-07-12 12:04:55.405854] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tSYY1zeHWF': 0100660 00:28:06.043 [2024-07-12 12:04:55.405922] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:06.043 request: 00:28:06.043 { 00:28:06.043 "name": "key0", 00:28:06.043 "path": "/tmp/tmp.tSYY1zeHWF", 00:28:06.043 "method": "keyring_file_add_key", 00:28:06.043 "req_id": 1 00:28:06.043 } 00:28:06.043 Got JSON-RPC error response 00:28:06.044 response: 00:28:06.044 { 00:28:06.044 "code": -1, 00:28:06.044 "message": "Operation not permitted" 00:28:06.044 } 00:28:06.044 12:04:55 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:06.044 12:04:55 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:06.044 12:04:55 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:06.044 12:04:55 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:06.044 12:04:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tSYY1zeHWF 00:28:06.044 12:04:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:06.044 12:04:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tSYY1zeHWF 00:28:06.302 12:04:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tSYY1zeHWF 00:28:06.302 12:04:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:06.302 12:04:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:06.302 12:04:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.302 12:04:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.302 12:04:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.302 12:04:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:06.559 12:04:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:06.559 12:04:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:06.559 12:04:55 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.559 12:04:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.817 [2024-07-12 12:04:56.143843] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tSYY1zeHWF': No such file or directory 00:28:06.817 [2024-07-12 12:04:56.143890] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:06.817 [2024-07-12 12:04:56.143943] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:06.817 [2024-07-12 12:04:56.143956] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:06.817 [2024-07-12 12:04:56.143968] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:06.817 request: 00:28:06.817 { 00:28:06.817 "name": "nvme0", 00:28:06.817 "trtype": "tcp", 00:28:06.817 "traddr": "127.0.0.1", 00:28:06.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.817 "adrfam": "ipv4", 00:28:06.817 "trsvcid": "4420", 00:28:06.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.817 "psk": "key0", 00:28:06.817 "method": "bdev_nvme_attach_controller", 
00:28:06.817 "req_id": 1 00:28:06.818 } 00:28:06.818 Got JSON-RPC error response 00:28:06.818 response: 00:28:06.818 { 00:28:06.818 "code": -19, 00:28:06.818 "message": "No such device" 00:28:06.818 } 00:28:06.818 12:04:56 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:06.818 12:04:56 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:06.818 12:04:56 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:06.818 12:04:56 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:06.818 12:04:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:06.818 12:04:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:07.076 12:04:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YbcP7G50XL 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:07.076 12:04:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YbcP7G50XL 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YbcP7G50XL 00:28:07.076 12:04:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YbcP7G50XL 00:28:07.076 12:04:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbcP7G50XL 00:28:07.076 12:04:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YbcP7G50XL 00:28:07.334 12:04:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.334 12:04:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.591 nvme0n1 00:28:07.591 12:04:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:07.591 12:04:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:07.591 12:04:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:07.591 12:04:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.591 12:04:57 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.591 12:04:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:07.849 12:04:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:07.849 12:04:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:07.849 12:04:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:08.107 12:04:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:08.107 12:04:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:08.107 12:04:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.107 12:04:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.107 12:04:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.365 12:04:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:08.365 12:04:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:08.365 12:04:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.365 12:04:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.365 12:04:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.365 12:04:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.365 12:04:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.623 12:04:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:08.623 12:04:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:08.623 12:04:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:08.881 12:04:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:08.881 12:04:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:08.881 12:04:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.139 12:04:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:09.139 12:04:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbcP7G50XL 00:28:09.139 12:04:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YbcP7G50XL 00:28:09.397 12:04:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BGXBp5TcDj 00:28:09.397 12:04:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BGXBp5TcDj 00:28:09.655 12:04:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.655 12:04:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.912 nvme0n1 00:28:09.912 12:04:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:09.912 12:04:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:10.170 12:04:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:10.170 "subsystems": [ 00:28:10.170 { 00:28:10.170 "subsystem": "keyring", 00:28:10.170 "config": [ 00:28:10.170 { 00:28:10.170 "method": "keyring_file_add_key", 00:28:10.170 "params": { 00:28:10.170 "name": "key0", 00:28:10.170 "path": "/tmp/tmp.YbcP7G50XL" 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "keyring_file_add_key", 00:28:10.170 "params": { 00:28:10.170 "name": "key1", 00:28:10.170 "path": "/tmp/tmp.BGXBp5TcDj" 00:28:10.170 } 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "iobuf", 00:28:10.170 "config": [ 00:28:10.170 { 00:28:10.170 "method": "iobuf_set_options", 00:28:10.170 "params": { 00:28:10.170 "small_pool_count": 8192, 00:28:10.170 "large_pool_count": 1024, 00:28:10.170 "small_bufsize": 8192, 00:28:10.170 "large_bufsize": 135168 00:28:10.170 } 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "sock", 00:28:10.170 "config": [ 00:28:10.170 { 00:28:10.170 "method": "sock_set_default_impl", 00:28:10.170 "params": { 00:28:10.170 "impl_name": "posix" 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "sock_impl_set_options", 00:28:10.170 "params": { 00:28:10.170 "impl_name": "ssl", 00:28:10.170 "recv_buf_size": 4096, 00:28:10.170 "send_buf_size": 4096, 00:28:10.170 "enable_recv_pipe": true, 00:28:10.170 "enable_quickack": false, 00:28:10.170 "enable_placement_id": 0, 00:28:10.170 "enable_zerocopy_send_server": true, 00:28:10.170 "enable_zerocopy_send_client": false, 00:28:10.170 "zerocopy_threshold": 0, 00:28:10.170 "tls_version": 0, 00:28:10.170 "enable_ktls": false 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "sock_impl_set_options", 00:28:10.170 "params": { 00:28:10.170 "impl_name": "posix", 00:28:10.170 "recv_buf_size": 2097152, 00:28:10.170 "send_buf_size": 2097152, 00:28:10.170 "enable_recv_pipe": true, 00:28:10.170 "enable_quickack": false, 00:28:10.170 "enable_placement_id": 0, 00:28:10.170 "enable_zerocopy_send_server": true, 00:28:10.170 "enable_zerocopy_send_client": false, 00:28:10.170 "zerocopy_threshold": 0, 00:28:10.170 "tls_version": 0, 00:28:10.170 "enable_ktls": false 00:28:10.170 } 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "vmd", 00:28:10.170 "config": [] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "accel", 00:28:10.170 "config": [ 00:28:10.170 { 00:28:10.170 "method": "accel_set_options", 00:28:10.170 "params": { 00:28:10.170 "small_cache_size": 128, 00:28:10.170 "large_cache_size": 16, 00:28:10.170 "task_count": 2048, 00:28:10.170 "sequence_count": 2048, 00:28:10.170 "buf_count": 2048 00:28:10.170 } 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "bdev", 00:28:10.170 "config": [ 00:28:10.170 { 00:28:10.170 "method": "bdev_set_options", 00:28:10.170 "params": { 00:28:10.170 "bdev_io_pool_size": 65535, 00:28:10.170 "bdev_io_cache_size": 256, 00:28:10.170 "bdev_auto_examine": true, 00:28:10.170 "iobuf_small_cache_size": 128, 
00:28:10.170 "iobuf_large_cache_size": 16 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_raid_set_options", 00:28:10.170 "params": { 00:28:10.170 "process_window_size_kb": 1024 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_iscsi_set_options", 00:28:10.170 "params": { 00:28:10.170 "timeout_sec": 30 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_nvme_set_options", 00:28:10.170 "params": { 00:28:10.170 "action_on_timeout": "none", 00:28:10.170 "timeout_us": 0, 00:28:10.170 "timeout_admin_us": 0, 00:28:10.170 "keep_alive_timeout_ms": 10000, 00:28:10.170 "arbitration_burst": 0, 00:28:10.170 "low_priority_weight": 0, 00:28:10.170 "medium_priority_weight": 0, 00:28:10.170 "high_priority_weight": 0, 00:28:10.170 "nvme_adminq_poll_period_us": 10000, 00:28:10.170 "nvme_ioq_poll_period_us": 0, 00:28:10.170 "io_queue_requests": 512, 00:28:10.170 "delay_cmd_submit": true, 00:28:10.170 "transport_retry_count": 4, 00:28:10.170 "bdev_retry_count": 3, 00:28:10.170 "transport_ack_timeout": 0, 00:28:10.170 "ctrlr_loss_timeout_sec": 0, 00:28:10.170 "reconnect_delay_sec": 0, 00:28:10.170 "fast_io_fail_timeout_sec": 0, 00:28:10.170 "disable_auto_failback": false, 00:28:10.170 "generate_uuids": false, 00:28:10.170 "transport_tos": 0, 00:28:10.170 "nvme_error_stat": false, 00:28:10.170 "rdma_srq_size": 0, 00:28:10.170 "io_path_stat": false, 00:28:10.170 "allow_accel_sequence": false, 00:28:10.170 "rdma_max_cq_size": 0, 00:28:10.170 "rdma_cm_event_timeout_ms": 0, 00:28:10.170 "dhchap_digests": [ 00:28:10.170 "sha256", 00:28:10.170 "sha384", 00:28:10.170 "sha512" 00:28:10.170 ], 00:28:10.170 "dhchap_dhgroups": [ 00:28:10.170 "null", 00:28:10.170 "ffdhe2048", 00:28:10.170 "ffdhe3072", 00:28:10.170 "ffdhe4096", 00:28:10.170 "ffdhe6144", 00:28:10.170 "ffdhe8192" 00:28:10.170 ] 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_nvme_attach_controller", 00:28:10.170 "params": { 00:28:10.170 "name": "nvme0", 00:28:10.170 "trtype": "TCP", 00:28:10.170 "adrfam": "IPv4", 00:28:10.170 "traddr": "127.0.0.1", 00:28:10.170 "trsvcid": "4420", 00:28:10.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.170 "prchk_reftag": false, 00:28:10.170 "prchk_guard": false, 00:28:10.170 "ctrlr_loss_timeout_sec": 0, 00:28:10.170 "reconnect_delay_sec": 0, 00:28:10.170 "fast_io_fail_timeout_sec": 0, 00:28:10.170 "psk": "key0", 00:28:10.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.170 "hdgst": false, 00:28:10.170 "ddgst": false 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_nvme_set_hotplug", 00:28:10.170 "params": { 00:28:10.170 "period_us": 100000, 00:28:10.170 "enable": false 00:28:10.170 } 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "method": "bdev_wait_for_examine" 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }, 00:28:10.170 { 00:28:10.170 "subsystem": "nbd", 00:28:10.170 "config": [] 00:28:10.170 } 00:28:10.170 ] 00:28:10.170 }' 00:28:10.170 12:04:59 keyring_file -- keyring/file.sh@114 -- # killprocess 1058303 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1058303 ']' 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1058303 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@954 -- # uname 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1058303 00:28:10.170 12:04:59 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1058303' 00:28:10.170 killing process with pid 1058303 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@968 -- # kill 1058303 00:28:10.170 Received shutdown signal, test time was about 1.000000 seconds 00:28:10.170 00:28:10.170 Latency(us) 00:28:10.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.170 =================================================================================================================== 00:28:10.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.170 12:04:59 keyring_file -- common/autotest_common.sh@973 -- # wait 1058303 00:28:10.739 12:04:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=1059766 00:28:10.739 12:04:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1059766 /var/tmp/bperf.sock 00:28:10.739 12:04:59 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1059766 ']' 00:28:10.739 12:04:59 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.739 12:04:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:10.739 12:04:59 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:10.739 12:04:59 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.739 12:04:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:10.739 "subsystems": [ 00:28:10.739 { 00:28:10.739 "subsystem": "keyring", 00:28:10.739 "config": [ 00:28:10.739 { 00:28:10.739 "method": "keyring_file_add_key", 00:28:10.739 "params": { 00:28:10.739 "name": "key0", 00:28:10.739 "path": "/tmp/tmp.YbcP7G50XL" 00:28:10.739 } 00:28:10.739 }, 00:28:10.739 { 00:28:10.739 "method": "keyring_file_add_key", 00:28:10.739 "params": { 00:28:10.739 "name": "key1", 00:28:10.739 "path": "/tmp/tmp.BGXBp5TcDj" 00:28:10.739 } 00:28:10.739 } 00:28:10.739 ] 00:28:10.739 }, 00:28:10.739 { 00:28:10.739 "subsystem": "iobuf", 00:28:10.739 "config": [ 00:28:10.739 { 00:28:10.739 "method": "iobuf_set_options", 00:28:10.739 "params": { 00:28:10.739 "small_pool_count": 8192, 00:28:10.739 "large_pool_count": 1024, 00:28:10.739 "small_bufsize": 8192, 00:28:10.739 "large_bufsize": 135168 00:28:10.739 } 00:28:10.739 } 00:28:10.739 ] 00:28:10.739 }, 00:28:10.739 { 00:28:10.739 "subsystem": "sock", 00:28:10.739 "config": [ 00:28:10.739 { 00:28:10.739 "method": "sock_set_default_impl", 00:28:10.739 "params": { 00:28:10.739 "impl_name": "posix" 00:28:10.739 } 00:28:10.739 }, 00:28:10.739 { 00:28:10.740 "method": "sock_impl_set_options", 00:28:10.740 "params": { 00:28:10.740 "impl_name": "ssl", 00:28:10.740 "recv_buf_size": 4096, 00:28:10.740 "send_buf_size": 4096, 00:28:10.740 "enable_recv_pipe": true, 00:28:10.740 "enable_quickack": false, 00:28:10.740 "enable_placement_id": 0, 00:28:10.740 "enable_zerocopy_send_server": true, 00:28:10.740 "enable_zerocopy_send_client": false, 00:28:10.740 "zerocopy_threshold": 0, 00:28:10.740 "tls_version": 0, 00:28:10.740 "enable_ktls": false 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "sock_impl_set_options", 00:28:10.740 "params": { 
00:28:10.740 "impl_name": "posix", 00:28:10.740 "recv_buf_size": 2097152, 00:28:10.740 "send_buf_size": 2097152, 00:28:10.740 "enable_recv_pipe": true, 00:28:10.740 "enable_quickack": false, 00:28:10.740 "enable_placement_id": 0, 00:28:10.740 "enable_zerocopy_send_server": true, 00:28:10.740 "enable_zerocopy_send_client": false, 00:28:10.740 "zerocopy_threshold": 0, 00:28:10.740 "tls_version": 0, 00:28:10.740 "enable_ktls": false 00:28:10.740 } 00:28:10.740 } 00:28:10.740 ] 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "subsystem": "vmd", 00:28:10.740 "config": [] 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "subsystem": "accel", 00:28:10.740 "config": [ 00:28:10.740 { 00:28:10.740 "method": "accel_set_options", 00:28:10.740 "params": { 00:28:10.740 "small_cache_size": 128, 00:28:10.740 "large_cache_size": 16, 00:28:10.740 "task_count": 2048, 00:28:10.740 "sequence_count": 2048, 00:28:10.740 "buf_count": 2048 00:28:10.740 } 00:28:10.740 } 00:28:10.740 ] 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "subsystem": "bdev", 00:28:10.740 "config": [ 00:28:10.740 { 00:28:10.740 "method": "bdev_set_options", 00:28:10.740 "params": { 00:28:10.740 "bdev_io_pool_size": 65535, 00:28:10.740 "bdev_io_cache_size": 256, 00:28:10.740 "bdev_auto_examine": true, 00:28:10.740 "iobuf_small_cache_size": 128, 00:28:10.740 "iobuf_large_cache_size": 16 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_raid_set_options", 00:28:10.740 "params": { 00:28:10.740 "process_window_size_kb": 1024 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_iscsi_set_options", 00:28:10.740 "params": { 00:28:10.740 "timeout_sec": 30 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_nvme_set_options", 00:28:10.740 "params": { 00:28:10.740 "action_on_timeout": "none", 00:28:10.740 "timeout_us": 0, 00:28:10.740 "timeout_admin_us": 0, 00:28:10.740 "keep_alive_timeout_ms": 10000, 00:28:10.740 "arbitration_burst": 0, 00:28:10.740 "low_priority_weight": 0, 00:28:10.740 "medium_priority_weight": 0, 00:28:10.740 "high_priority_weight": 0, 00:28:10.740 "nvme_adminq_poll_period_us": 10000, 00:28:10.740 "nvme_ioq_poll_period_us": 0, 00:28:10.740 "io_queue_requests": 512, 00:28:10.740 "delay_cmd_submit": true, 00:28:10.740 "transport_retry_count": 4, 00:28:10.740 "bdev_retry_count": 3, 00:28:10.740 "transport_ack_timeout": 0, 00:28:10.740 "ctrlr_loss_timeout_sec": 0, 00:28:10.740 "reconnect_delay_sec": 0, 00:28:10.740 "fast_io_fail_timeout_sec": 0, 00:28:10.740 "disable_auto_failback": false, 00:28:10.740 "generate_uuids": false, 00:28:10.740 "transport_tos": 0, 00:28:10.740 "nvme_error_stat": false, 00:28:10.740 "rdma_srq_size": 0, 00:28:10.740 "io_path_stat": false, 00:28:10.740 "allow_accel_sequence": false, 00:28:10.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:10.740 "rdma_max_cq_size": 0, 00:28:10.740 "rdma_cm_event_timeout_ms": 0, 00:28:10.740 "dhchap_digests": [ 00:28:10.740 "sha256", 00:28:10.740 "sha384", 00:28:10.740 "sha512" 00:28:10.740 ], 00:28:10.740 "dhchap_dhgroups": [ 00:28:10.740 "null", 00:28:10.740 "ffdhe2048", 00:28:10.740 "ffdhe3072", 00:28:10.740 "ffdhe4096", 00:28:10.740 "ffdhe6144", 00:28:10.740 "ffdhe8192" 00:28:10.740 ] 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_nvme_attach_controller", 00:28:10.740 "params": { 00:28:10.740 "name": "nvme0", 00:28:10.740 "trtype": "TCP", 00:28:10.740 "adrfam": "IPv4", 00:28:10.740 "traddr": "127.0.0.1", 00:28:10.740 "trsvcid": "4420", 00:28:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.740 "prchk_reftag": false, 00:28:10.740 "prchk_guard": false, 00:28:10.740 "ctrlr_loss_timeout_sec": 0, 00:28:10.740 "reconnect_delay_sec": 0, 00:28:10.740 "fast_io_fail_timeout_sec": 0, 00:28:10.740 "psk": "key0", 00:28:10.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.740 "hdgst": false, 00:28:10.740 "ddgst": false 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_nvme_set_hotplug", 00:28:10.740 "params": { 00:28:10.740 "period_us": 100000, 00:28:10.740 "enable": false 00:28:10.740 } 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "method": "bdev_wait_for_examine" 00:28:10.740 } 00:28:10.740 ] 00:28:10.740 }, 00:28:10.740 { 00:28:10.740 "subsystem": "nbd", 00:28:10.740 "config": [] 00:28:10.740 } 00:28:10.740 ] 00:28:10.740 }' 00:28:10.740 12:04:59 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:10.740 12:04:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:10.740 [2024-07-12 12:04:59.967993] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
00:28:10.740 [2024-07-12 12:04:59.968086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059766 ] 00:28:10.740 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.740 [2024-07-12 12:05:00.031272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.740 [2024-07-12 12:05:00.148483] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.008 [2024-07-12 12:05:00.342653] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:11.572 12:05:00 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:11.572 12:05:00 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:28:11.572 12:05:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:11.572 12:05:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:11.572 12:05:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.829 12:05:01 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:11.830 12:05:01 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:11.830 12:05:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:11.830 12:05:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.830 12:05:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.830 12:05:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.830 12:05:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.087 12:05:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:12.087 12:05:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:12.087 12:05:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:12.087 12:05:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.087 12:05:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.087 12:05:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.087 12:05:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:12.344 12:05:01 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:12.344 12:05:01 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:12.344 12:05:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:12.344 12:05:01 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:12.602 12:05:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:12.602 12:05:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:12.602 12:05:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YbcP7G50XL /tmp/tmp.BGXBp5TcDj 00:28:12.602 12:05:01 keyring_file -- keyring/file.sh@20 -- # killprocess 1059766 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1059766 ']' 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1059766 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1059766 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1059766' 00:28:12.602 killing process with pid 1059766 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@968 -- # kill 1059766 00:28:12.602 Received shutdown signal, test time was about 1.000000 seconds 00:28:12.602 00:28:12.602 Latency(us) 00:28:12.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.602 =================================================================================================================== 00:28:12.602 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:12.602 12:05:01 keyring_file -- common/autotest_common.sh@973 -- # wait 1059766 00:28:12.861 12:05:02 keyring_file -- keyring/file.sh@21 -- # killprocess 1058299 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1058299 ']' 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1058299 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@954 -- # uname 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1058299 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1058299' 00:28:12.861 killing process with pid 1058299 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@968 -- # kill 1058299 00:28:12.861 [2024-07-12 12:05:02.230614] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:12.861 12:05:02 keyring_file -- common/autotest_common.sh@973 -- # wait 1058299 00:28:13.428 00:28:13.428 real 0m14.336s 00:28:13.428 user 0m35.671s 00:28:13.428 sys 0m3.333s 00:28:13.428 12:05:02 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:13.428 12:05:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:13.428 ************************************ 00:28:13.428 END TEST keyring_file 00:28:13.428 ************************************ 00:28:13.428 12:05:02 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:13.428 12:05:02 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:13.428 12:05:02 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:13.428 12:05:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:13.428 12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:28:13.428 ************************************ 00:28:13.428 START TEST keyring_linux 00:28:13.428 ************************************ 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:13.429 * Looking for test storage... 
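Both the keyring_file run that just finished and the keyring_linux run starting here drive their assertions through the same small keyring/common.sh helpers; reconstructed from the jq filters visible in the trace (a sketch, not the verbatim script -- $rootdir stands for the spdk checkout):

bperf_cmd() {   # keyring/common.sh@8: forward any RPC to the bdevperf socket
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
}
get_key() {     # keyring/common.sh@10: JSON object for one named key
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {  # keyring/common.sh@12: drives the (( N == N )) refcount checks
    get_key "$1" | jq -r .refcnt
}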
00:28:13.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.429 12:05:02 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.429 12:05:02 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.429 12:05:02 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.429 12:05:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.429 12:05:02 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.429 12:05:02 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.429 12:05:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:13.429 12:05:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:13.429 12:05:02 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:13.429 /tmp/:spdk-test:key0 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:13.429 12:05:02 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:13.429 12:05:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:13.429 /tmp/:spdk-test:key1 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1060128 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:13.429 12:05:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1060128 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1060128 ']' 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:13.429 12:05:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:13.701 [2024-07-12 12:05:02.927659] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
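prep_key above ends in a bare "python -" step (nvmf/common.sh@705) that turns the hex key into the NVMe/TCP PSK interchange string written to /tmp/:spdk-test:key0. A rough equivalent is sketched below; the payload layout (base64 of the configured key followed by a CRC32, little-endian byte order) is an assumption inferred from the NVMeTLSkey-1:00:... strings printed later in the log, not taken from the script itself:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order assumed here
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY

The chmod 0600 that follows is not cosmetic: the keyring_file negative test earlier rejected a 0660 key file with "Invalid permissions for key file", so a group- or world-readable PSK file never makes it into the keyring.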
00:28:13.701 [2024-07-12 12:05:02.927754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060128 ] 00:28:13.701 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.701 [2024-07-12 12:05:02.988026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.701 [2024-07-12 12:05:03.099350] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:13.960 [2024-07-12 12:05:03.349712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.960 null0 00:28:13.960 [2024-07-12 12:05:03.381764] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:13.960 [2024-07-12 12:05:03.382248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:13.960 401276608 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:13.960 861543083 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1060257 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:13.960 12:05:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1060257 /var/tmp/bperf.sock 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1060257 ']' 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:13.960 12:05:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:13.960 [2024-07-12 12:05:03.449927] Starting SPDK v24.09-pre git sha1 5e8e6dfc2 / DPDK 24.03.0 initialization... 
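The serials 401276608 and 861543083 are what keyctl prints when the two interchange-format PSKs are added to the session keyring (@s) under the descriptions SPDK will later be asked to resolve; bdevperf is then started with -z --wait-for-rpc so the linux keyring backend can be enabled before framework init. A condensed sketch of the keyctl step (the trace passes the payloads literally on the command line; reading them back from the files prep_key wrote is equivalent):

# Load both PSKs into the kernel session keyring; keyctl echoes the serial.
sn0=$(keyctl add user ":spdk-test:key0" "$(< /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user ":spdk-test:key1" "$(< /tmp/:spdk-test:key1)" @s)
echo "$sn0 $sn1"   # 401276608 861543083 in this run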
00:28:13.960 [2024-07-12 12:05:03.450012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060257 ] 00:28:14.219 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.219 [2024-07-12 12:05:03.515160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.219 [2024-07-12 12:05:03.630738] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.156 12:05:04 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:15.156 12:05:04 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:28:15.156 12:05:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:15.156 12:05:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:15.415 12:05:04 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:15.415 12:05:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:15.673 12:05:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:15.673 12:05:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:15.931 [2024-07-12 12:05:05.200556] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:15.931 nvme0n1 00:28:15.931 12:05:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:15.931 12:05:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:15.931 12:05:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:15.931 12:05:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:15.931 12:05:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.931 12:05:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:16.189 12:05:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:16.189 12:05:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:16.189 12:05:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:16.189 12:05:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:16.189 12:05:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:16.189 12:05:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:16.189 12:05:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@25 -- # sn=401276608 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
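check_keys then cross-checks SPDK's view of the key against the kernel's: keyring_get_keys must report exactly the expected number of keys, the .sn field of the named key must equal the serial keyctl search finds for the same description, and keyctl print on that serial must still return the interchange string. Reconstructed from the linux.sh line numbers in the trace (a sketch; the real helper compares the printed payload against the full expected string):

get_keysn() {   # linux.sh@16
    keyctl search @s user "$1"
}
check_keys() {  # linux.sh@19..@27: expected count plus an optional key to verify
    local count=$1 name=$2 sn
    (( $(bperf_cmd keyring_get_keys | jq length) == count )) || return 1
    (( count == 0 )) && return 0
    sn=$(get_key "$name" | jq -r .sn)
    [[ $sn == $(get_keysn "$name") ]] || return 1
    keyctl print "$sn" | grep -q '^NVMeTLSkey-1:'
}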
00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 401276608 == \4\0\1\2\7\6\6\0\8 ]] 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 401276608 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:16.449 12:05:05 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.449 Running I/O for 1 seconds... 00:28:17.864 00:28:17.864 Latency(us) 00:28:17.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.864 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:17.864 nvme0n1 : 1.02 8718.24 34.06 0.00 0.00 14558.52 4029.25 18252.99 00:28:17.864 =================================================================================================================== 00:28:17.864 Total : 8718.24 34.06 0.00 0.00 14558.52 4029.25 18252.99 00:28:17.864 0 00:28:17.864 12:05:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:17.864 12:05:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:17.864 12:05:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:17.864 12:05:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:17.865 12:05:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:17.865 12:05:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:17.865 12:05:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.865 12:05:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:18.122 12:05:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:18.122 12:05:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:18.122 12:05:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:18.122 12:05:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.122 12:05:07 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:18.123 12:05:07 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.123 12:05:07 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.381 [2024-07-12 12:05:07.649281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:18.381 [2024-07-12 12:05:07.650070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a33b00 (107): Transport endpoint is not connected 00:28:18.381 [2024-07-12 12:05:07.651063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a33b00 (9): Bad file descriptor 00:28:18.381 [2024-07-12 12:05:07.652062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:18.381 [2024-07-12 12:05:07.652087] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:18.381 [2024-07-12 12:05:07.652101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:18.381 request: 00:28:18.381 { 00:28:18.381 "name": "nvme0", 00:28:18.381 "trtype": "tcp", 00:28:18.381 "traddr": "127.0.0.1", 00:28:18.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:18.381 "adrfam": "ipv4", 00:28:18.381 "trsvcid": "4420", 00:28:18.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.381 "psk": ":spdk-test:key1", 00:28:18.381 "method": "bdev_nvme_attach_controller", 00:28:18.381 "req_id": 1 00:28:18.381 } 00:28:18.381 Got JSON-RPC error response 00:28:18.381 response: 00:28:18.381 { 00:28:18.381 "code": -5, 00:28:18.381 "message": "Input/output error" 00:28:18.381 } 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@33 -- # sn=401276608 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 401276608 00:28:18.381 1 links removed 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@33 -- # sn=861543083 00:28:18.381 12:05:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 861543083 00:28:18.381 1 links removed 00:28:18.381 12:05:07 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 1060257 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1060257 ']' 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1060257 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1060257 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1060257' 00:28:18.381 killing process with pid 1060257 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@968 -- # kill 1060257 00:28:18.381 Received shutdown signal, test time was about 1.000000 seconds 00:28:18.381 00:28:18.381 Latency(us) 00:28:18.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.381 =================================================================================================================== 00:28:18.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.381 12:05:07 keyring_linux -- common/autotest_common.sh@973 -- # wait 1060257 00:28:18.641 12:05:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1060128 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1060128 ']' 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1060128 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1060128 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:18.641 12:05:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:18.641 12:05:08 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1060128' 00:28:18.641 killing process with pid 1060128 00:28:18.641 12:05:08 keyring_linux -- common/autotest_common.sh@968 -- # kill 1060128 00:28:18.641 12:05:08 keyring_linux -- common/autotest_common.sh@973 -- # wait 1060128 00:28:19.209 00:28:19.209 real 0m5.734s 00:28:19.209 user 0m11.250s 00:28:19.209 sys 0m1.652s 00:28:19.209 12:05:08 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:19.209 12:05:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:19.209 ************************************ 00:28:19.209 END TEST keyring_linux 00:28:19.209 ************************************ 00:28:19.209 12:05:08 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@352 
-- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:19.209 12:05:08 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:19.209 12:05:08 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:19.209 12:05:08 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:19.209 12:05:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:19.209 12:05:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:19.209 12:05:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:19.209 12:05:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:19.209 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.209 12:05:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:19.209 12:05:08 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:28:19.209 12:05:08 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:28:19.209 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:28:21.107 INFO: APP EXITING 00:28:21.107 INFO: killing all VMs 00:28:21.107 INFO: killing vhost app 00:28:21.107 INFO: EXIT DONE 00:28:22.040 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:22.040 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:22.040 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:22.040 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:22.040 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:22.040 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:22.040 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:22.040 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:22.040 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:22.040 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:22.040 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:22.040 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:22.040 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:22.040 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:22.040 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:22.040 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:22.040 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:23.413 Cleaning 00:28:23.413 Removing: /var/run/dpdk/spdk0/config 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:23.413 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:23.413 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:23.413 Removing: /var/run/dpdk/spdk1/config 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:23.413 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:23.413 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:23.413 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:23.413 Removing: /var/run/dpdk/spdk2/config 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:23.413 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:23.413 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:23.413 Removing: /var/run/dpdk/spdk3/config 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:23.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:23.671 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:23.671 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:23.671 Removing: /var/run/dpdk/spdk4/config 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:23.671 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:23.671 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:23.671 Removing: /dev/shm/bdev_svc_trace.1 00:28:23.671 Removing: /dev/shm/nvmf_trace.0 00:28:23.671 Removing: /dev/shm/spdk_tgt_trace.pid800421 00:28:23.671 Removing: /var/run/dpdk/spdk0 00:28:23.671 Removing: /var/run/dpdk/spdk1 00:28:23.671 Removing: /var/run/dpdk/spdk2 00:28:23.671 Removing: /var/run/dpdk/spdk3 00:28:23.671 Removing: /var/run/dpdk/spdk4 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1003964 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1009062 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1009064 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1021390 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1021933 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1022465 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1023008 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1023610 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1024148 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1024562 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1025078 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1027587 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1027741 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1031582 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1031690 00:28:23.671 Removing: 
/var/run/dpdk/spdk_pid1033302 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1038258 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1038341 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1041452 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1043260 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1044662 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1045402 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1046809 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1047679 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1052906 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1053230 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1053622 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1055169 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1055570 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1055935 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1058299 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1058303 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1059766 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1060128 00:28:23.671 Removing: /var/run/dpdk/spdk_pid1060257 00:28:23.671 Removing: /var/run/dpdk/spdk_pid798882 00:28:23.671 Removing: /var/run/dpdk/spdk_pid799605 00:28:23.671 Removing: /var/run/dpdk/spdk_pid800421 00:28:23.671 Removing: /var/run/dpdk/spdk_pid800861 00:28:23.671 Removing: /var/run/dpdk/spdk_pid801554 00:28:23.671 Removing: /var/run/dpdk/spdk_pid801694 00:28:23.671 Removing: /var/run/dpdk/spdk_pid802411 00:28:23.671 Removing: /var/run/dpdk/spdk_pid802418 00:28:23.671 Removing: /var/run/dpdk/spdk_pid802662 00:28:23.671 Removing: /var/run/dpdk/spdk_pid803971 00:28:23.671 Removing: /var/run/dpdk/spdk_pid804912 00:28:23.671 Removing: /var/run/dpdk/spdk_pid805215 00:28:23.671 Removing: /var/run/dpdk/spdk_pid805401 00:28:23.671 Removing: /var/run/dpdk/spdk_pid805607 00:28:23.671 Removing: /var/run/dpdk/spdk_pid805798 00:28:23.671 Removing: /var/run/dpdk/spdk_pid805955 00:28:23.671 Removing: /var/run/dpdk/spdk_pid806123 00:28:23.671 Removing: /var/run/dpdk/spdk_pid806316 00:28:23.671 Removing: /var/run/dpdk/spdk_pid806912 00:28:23.671 Removing: /var/run/dpdk/spdk_pid809741 00:28:23.671 Removing: /var/run/dpdk/spdk_pid810129 00:28:23.671 Removing: /var/run/dpdk/spdk_pid810296 00:28:23.671 Removing: /var/run/dpdk/spdk_pid810307 00:28:23.671 Removing: /var/run/dpdk/spdk_pid810731 00:28:23.672 Removing: /var/run/dpdk/spdk_pid810757 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811172 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811308 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811481 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811614 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811783 00:28:23.672 Removing: /var/run/dpdk/spdk_pid811910 00:28:23.672 Removing: /var/run/dpdk/spdk_pid812273 00:28:23.672 Removing: /var/run/dpdk/spdk_pid812439 00:28:23.672 Removing: /var/run/dpdk/spdk_pid812664 00:28:23.672 Removing: /var/run/dpdk/spdk_pid812817 00:28:23.672 Removing: /var/run/dpdk/spdk_pid812944 00:28:23.672 Removing: /var/run/dpdk/spdk_pid813068 00:28:23.672 Removing: /var/run/dpdk/spdk_pid813287 00:28:23.672 Removing: /var/run/dpdk/spdk_pid813448 00:28:23.672 Removing: /var/run/dpdk/spdk_pid813704 00:28:23.672 Removing: /var/run/dpdk/spdk_pid813881 00:28:23.672 Removing: /var/run/dpdk/spdk_pid814036 00:28:23.672 Removing: /var/run/dpdk/spdk_pid814278 00:28:23.672 Removing: /var/run/dpdk/spdk_pid814469 00:28:23.672 Removing: /var/run/dpdk/spdk_pid814625 00:28:23.672 Removing: /var/run/dpdk/spdk_pid814904 00:28:23.672 Removing: /var/run/dpdk/spdk_pid815056 00:28:23.672 Removing: /var/run/dpdk/spdk_pid815216 00:28:23.672 
Removing: /var/run/dpdk/spdk_pid815491 00:28:23.672 Removing: /var/run/dpdk/spdk_pid815651 00:28:23.929 Removing: /var/run/dpdk/spdk_pid815808 00:28:23.929 Removing: /var/run/dpdk/spdk_pid816085 00:28:23.929 Removing: /var/run/dpdk/spdk_pid816243 00:28:23.929 Removing: /var/run/dpdk/spdk_pid816399 00:28:23.929 Removing: /var/run/dpdk/spdk_pid816679 00:28:23.929 Removing: /var/run/dpdk/spdk_pid816836 00:28:23.929 Removing: /var/run/dpdk/spdk_pid817015 00:28:23.929 Removing: /var/run/dpdk/spdk_pid817182 00:28:23.929 Removing: /var/run/dpdk/spdk_pid817515 00:28:23.929 Removing: /var/run/dpdk/spdk_pid819572 00:28:23.929 Removing: /var/run/dpdk/spdk_pid845591 00:28:23.929 Removing: /var/run/dpdk/spdk_pid848720 00:28:23.929 Removing: /var/run/dpdk/spdk_pid855629 00:28:23.929 Removing: /var/run/dpdk/spdk_pid858852 00:28:23.929 Removing: /var/run/dpdk/spdk_pid861178 00:28:23.929 Removing: /var/run/dpdk/spdk_pid861599 00:28:23.929 Removing: /var/run/dpdk/spdk_pid868799 00:28:23.929 Removing: /var/run/dpdk/spdk_pid868851 00:28:23.929 Removing: /var/run/dpdk/spdk_pid869395 00:28:23.929 Removing: /var/run/dpdk/spdk_pid870048 00:28:23.929 Removing: /var/run/dpdk/spdk_pid870684 00:28:23.929 Removing: /var/run/dpdk/spdk_pid871066 00:28:23.929 Removing: /var/run/dpdk/spdk_pid871115 00:28:23.929 Removing: /var/run/dpdk/spdk_pid871259 00:28:23.929 Removing: /var/run/dpdk/spdk_pid871391 00:28:23.929 Removing: /var/run/dpdk/spdk_pid871396 00:28:23.929 Removing: /var/run/dpdk/spdk_pid872051 00:28:23.929 Removing: /var/run/dpdk/spdk_pid872592 00:28:23.929 Removing: /var/run/dpdk/spdk_pid873254 00:28:23.929 Removing: /var/run/dpdk/spdk_pid873650 00:28:23.929 Removing: /var/run/dpdk/spdk_pid873778 00:28:23.929 Removing: /var/run/dpdk/spdk_pid873918 00:28:23.929 Removing: /var/run/dpdk/spdk_pid874933 00:28:23.929 Removing: /var/run/dpdk/spdk_pid875654 00:28:23.929 Removing: /var/run/dpdk/spdk_pid881633 00:28:23.929 Removing: /var/run/dpdk/spdk_pid881912 00:28:23.929 Removing: /var/run/dpdk/spdk_pid884415 00:28:23.929 Removing: /var/run/dpdk/spdk_pid888114 00:28:23.929 Removing: /var/run/dpdk/spdk_pid890166 00:28:23.929 Removing: /var/run/dpdk/spdk_pid896424 00:28:23.929 Removing: /var/run/dpdk/spdk_pid901625 00:28:23.929 Removing: /var/run/dpdk/spdk_pid902939 00:28:23.929 Removing: /var/run/dpdk/spdk_pid903608 00:28:23.929 Removing: /var/run/dpdk/spdk_pid914429 00:28:23.929 Removing: /var/run/dpdk/spdk_pid916635 00:28:23.929 Removing: /var/run/dpdk/spdk_pid941720 00:28:23.929 Removing: /var/run/dpdk/spdk_pid944510 00:28:23.929 Removing: /var/run/dpdk/spdk_pid945688 00:28:23.929 Removing: /var/run/dpdk/spdk_pid947008 00:28:23.929 Removing: /var/run/dpdk/spdk_pid947136 00:28:23.929 Removing: /var/run/dpdk/spdk_pid947157 00:28:23.929 Removing: /var/run/dpdk/spdk_pid947296 00:28:23.929 Removing: /var/run/dpdk/spdk_pid947864 00:28:23.929 Removing: /var/run/dpdk/spdk_pid949061 00:28:23.929 Removing: /var/run/dpdk/spdk_pid949914 00:28:23.929 Removing: /var/run/dpdk/spdk_pid950224 00:28:23.929 Removing: /var/run/dpdk/spdk_pid951840 00:28:23.929 Removing: /var/run/dpdk/spdk_pid952388 00:28:23.929 Removing: /var/run/dpdk/spdk_pid952831 00:28:23.929 Removing: /var/run/dpdk/spdk_pid955352 00:28:23.929 Removing: /var/run/dpdk/spdk_pid961251 00:28:23.929 Removing: /var/run/dpdk/spdk_pid963932 00:28:23.929 Removing: /var/run/dpdk/spdk_pid967855 00:28:23.929 Removing: /var/run/dpdk/spdk_pid969350 00:28:23.929 Removing: /var/run/dpdk/spdk_pid970597 00:28:23.929 Removing: /var/run/dpdk/spdk_pid973272 00:28:23.929 Removing: 
/var/run/dpdk/spdk_pid975548 00:28:23.929 Removing: /var/run/dpdk/spdk_pid979844 00:28:23.929 Removing: /var/run/dpdk/spdk_pid979852 00:28:23.929 Removing: /var/run/dpdk/spdk_pid982619 00:28:23.929 Removing: /var/run/dpdk/spdk_pid982873 00:28:23.929 Removing: /var/run/dpdk/spdk_pid983013 00:28:23.929 Removing: /var/run/dpdk/spdk_pid983278 00:28:23.929 Removing: /var/run/dpdk/spdk_pid983283 00:28:23.929 Removing: /var/run/dpdk/spdk_pid986047 00:28:23.929 Removing: /var/run/dpdk/spdk_pid986382 00:28:23.929 Removing: /var/run/dpdk/spdk_pid989038 00:28:23.929 Removing: /var/run/dpdk/spdk_pid991006 00:28:23.929 Removing: /var/run/dpdk/spdk_pid994309 00:28:23.929 Removing: /var/run/dpdk/spdk_pid997626 00:28:23.929 Clean 00:28:24.186 12:05:13 -- common/autotest_common.sh@1450 -- # return 0 00:28:24.186 12:05:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:24.186 12:05:13 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:24.186 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:28:24.186 12:05:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:24.186 12:05:13 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:24.186 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:28:24.186 12:05:13 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:24.186 12:05:13 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:24.186 12:05:13 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:24.186 12:05:13 -- spdk/autotest.sh@391 -- # hash lcov 00:28:24.186 12:05:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:24.186 12:05:13 -- spdk/autotest.sh@393 -- # hostname 00:28:24.186 12:05:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:24.186 geninfo: WARNING: invalid characters removed from testname! 
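The coverage post-processing above is the usual lcov capture/merge/filter sequence. Condensed sketch only: $SPDK_DIR and $OUT stand in for the workspace and output paths printed in the log, and the long run of --rc/--no-external options is abbreviated to the branch-coverage switch.

    # capture counters produced during the run, tagging them with the build host name
    lcov --rc lcov_branch_coverage=1 -q -c -d "$SPDK_DIR" -t spdk-gp-11 -o "$OUT/cov_test.info"
    # merge with the pre-test baseline, then strip bundled DPDK and system headers
    lcov --rc lcov_branch_coverage=1 -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    lcov --rc lcov_branch_coverage=1 -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov --rc lcov_branch_coverage=1 -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"
    # the same -r filter is repeated for examples/vmd, app/spdk_lspci and app/spdk_top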
00:28:56.266 12:05:41 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:56.266 12:05:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:59.561 12:05:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:02.099 12:05:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:05.428 12:05:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:07.962 12:05:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:11.256 12:06:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:11.256 12:06:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.256 12:06:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:11.256 12:06:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.256 12:06:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.256 12:06:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.256 12:06:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.256 12:06:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.256 12:06:00 -- paths/export.sh@5 -- $ export PATH 00:29:11.256 12:06:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.256 12:06:00 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:11.256 12:06:00 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:11.256 12:06:00 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720778760.XXXXXX 00:29:11.256 12:06:00 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720778760.hkADrR 00:29:11.256 12:06:00 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:11.256 12:06:00 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:11.256 12:06:00 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:11.256 12:06:00 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:11.256 12:06:00 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:11.256 12:06:00 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:11.256 12:06:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:11.256 12:06:00 -- common/autotest_common.sh@10 -- $ set +x 00:29:11.256 12:06:00 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:11.256 12:06:00 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:11.256 12:06:00 -- pm/common@17 -- $ local monitor 00:29:11.256 12:06:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.256 12:06:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.256 12:06:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.256 12:06:00 -- pm/common@21 -- $ date +%s 00:29:11.256 12:06:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.256 12:06:00 -- pm/common@21 -- $ date +%s 00:29:11.256 
12:06:00 -- pm/common@25 -- $ sleep 1 00:29:11.256 12:06:00 -- pm/common@21 -- $ date +%s 00:29:11.256 12:06:00 -- pm/common@21 -- $ date +%s 00:29:11.256 12:06:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720778760 00:29:11.256 12:06:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720778760 00:29:11.256 12:06:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720778760 00:29:11.256 12:06:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720778760 00:29:11.256 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720778760_collect-vmstat.pm.log 00:29:11.256 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720778760_collect-cpu-load.pm.log 00:29:11.256 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720778760_collect-cpu-temp.pm.log 00:29:11.256 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720778760_collect-bmc-pm.bmc.pm.log 00:29:11.825 12:06:01 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:11.825 12:06:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:11.825 12:06:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:11.825 12:06:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:11.825 12:06:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:11.825 12:06:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:11.825 12:06:01 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:11.825 12:06:01 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:11.825 12:06:01 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:11.825 12:06:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:11.825 12:06:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:11.825 12:06:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:11.825 12:06:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:11.825 12:06:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.825 12:06:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:11.825 12:06:01 -- pm/common@44 -- $ pid=1070077 00:29:11.825 12:06:01 -- pm/common@50 -- $ kill -TERM 1070077 00:29:11.825 12:06:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.825 12:06:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:11.825 12:06:01 -- pm/common@44 -- $ pid=1070079 00:29:11.825 12:06:01 -- pm/common@50 -- $ kill 
-TERM 1070079 00:29:11.825 12:06:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.825 12:06:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:11.825 12:06:01 -- pm/common@44 -- $ pid=1070081 00:29:11.825 12:06:01 -- pm/common@50 -- $ kill -TERM 1070081 00:29:11.825 12:06:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.825 12:06:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:11.825 12:06:01 -- pm/common@44 -- $ pid=1070113 00:29:11.825 12:06:01 -- pm/common@50 -- $ sudo -E kill -TERM 1070113 00:29:11.825 + [[ -n 715169 ]] 00:29:11.825 + sudo kill 715169 00:29:12.093 [Pipeline] } 00:29:12.112 [Pipeline] // stage 00:29:12.117 [Pipeline] } 00:29:12.135 [Pipeline] // timeout 00:29:12.141 [Pipeline] } 00:29:12.158 [Pipeline] // catchError 00:29:12.163 [Pipeline] } 00:29:12.181 [Pipeline] // wrap 00:29:12.188 [Pipeline] } 00:29:12.205 [Pipeline] // catchError 00:29:12.215 [Pipeline] stage 00:29:12.217 [Pipeline] { (Epilogue) 00:29:12.233 [Pipeline] catchError 00:29:12.235 [Pipeline] { 00:29:12.250 [Pipeline] echo 00:29:12.252 Cleanup processes 00:29:12.259 [Pipeline] sh 00:29:12.546 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:12.546 1070233 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:12.546 1070405 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:12.561 [Pipeline] sh 00:29:12.848 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:12.848 ++ grep -v 'sudo pgrep' 00:29:12.848 ++ awk '{print $1}' 00:29:12.848 + sudo kill -9 1070233 00:29:12.860 [Pipeline] sh 00:29:13.146 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:21.262 [Pipeline] sh 00:29:21.547 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:21.547 Artifacts sizes are good 00:29:21.561 [Pipeline] archiveArtifacts 00:29:21.567 Archiving artifacts 00:29:21.790 [Pipeline] sh 00:29:22.077 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:22.093 [Pipeline] cleanWs 00:29:22.103 [WS-CLEANUP] Deleting project workspace... 00:29:22.103 [WS-CLEANUP] Deferred wipeout is used... 00:29:22.110 [WS-CLEANUP] done 00:29:22.112 [Pipeline] } 00:29:22.132 [Pipeline] // catchError 00:29:22.145 [Pipeline] sh 00:29:22.424 + logger -p user.info -t JENKINS-CI 00:29:22.432 [Pipeline] } 00:29:22.449 [Pipeline] // stage 00:29:22.454 [Pipeline] } 00:29:22.471 [Pipeline] // node 00:29:22.477 [Pipeline] End of Pipeline 00:29:22.511 Finished: SUCCESS
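As a footnote, the workspace cleanup run in the epilogue above boils down to one idiom; the variable names below are illustrative, while the individual commands mirror the traced ones.

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # list anything still running out of the workspace, drop the pgrep line itself, keep the PIDs
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # force-kill the stragglers; '|| true' keeps the stage green when the list is empty
    sudo kill -9 $pids || true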